00:00:00.001 Started by upstream project "autotest-per-patch" build number 132031 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.068 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.069 The recommended git tool is: git 00:00:00.069 using credential 00000000-0000-0000-0000-000000000002 00:00:00.070 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.145 Fetching changes from the remote Git repository 00:00:00.146 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.221 Using shallow fetch with depth 1 00:00:00.221 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.221 > git --version # timeout=10 00:00:00.297 > git --version # 'git version 2.39.2' 00:00:00.297 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.341 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.341 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.374 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.387 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.402 Checking out Revision 71582ff3be096f9d5ed302be37c05572278bd285 (FETCH_HEAD) 00:00:07.402 > git config core.sparsecheckout # timeout=10 00:00:07.413 > git read-tree -mu HEAD # timeout=10 00:00:07.430 > git checkout -f 71582ff3be096f9d5ed302be37c05572278bd285 # timeout=5 00:00:07.452 Commit message: "jenkins/jjb-config: Add SPDK_TEST_NVME_INTERRUPT to nvme-phy job" 00:00:07.452 > git rev-list --no-walk 71582ff3be096f9d5ed302be37c05572278bd285 # timeout=10 00:00:07.544 [Pipeline] Start of Pipeline 00:00:07.555 [Pipeline] library 00:00:07.556 Loading library shm_lib@master 00:00:07.556 Library shm_lib@master is cached. Copying from home. 00:00:07.570 [Pipeline] node 00:00:22.572 Still waiting to schedule task 00:00:22.573 Waiting for next available executor on ‘vagrant-vm-host’ 00:07:39.330 Running on VM-host-SM38 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:07:39.332 [Pipeline] { 00:07:39.345 [Pipeline] catchError 00:07:39.347 [Pipeline] { 00:07:39.364 [Pipeline] wrap 00:07:39.373 [Pipeline] { 00:07:39.381 [Pipeline] stage 00:07:39.383 [Pipeline] { (Prologue) 00:07:39.403 [Pipeline] echo 00:07:39.405 Node: VM-host-SM38 00:07:39.412 [Pipeline] cleanWs 00:07:39.422 [WS-CLEANUP] Deleting project workspace... 00:07:39.422 [WS-CLEANUP] Deferred wipeout is used... 
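For reference, the shallow jbp checkout recorded at the top of this log can be repeated by hand with roughly the following commands (a sketch only; the Jenkins credential, the proxy-dmz.intel.com proxy and the per-command timeouts are omitted, and the revision is the FETCH_HEAD hash printed above):

  git init jbp && cd jbp
  # single-commit fetch of the master tip, as the job does with --depth=1
  git fetch --tags --force --depth=1 https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
  # detach onto the exact revision the job reports checking out
  git checkout -f 71582ff3be096f9d5ed302be37c05572278bd285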
00:07:39.430 [WS-CLEANUP] done 00:07:39.634 [Pipeline] setCustomBuildProperty 00:07:39.726 [Pipeline] httpRequest 00:07:40.124 [Pipeline] echo 00:07:40.126 Sorcerer 10.211.164.101 is alive 00:07:40.138 [Pipeline] retry 00:07:40.140 [Pipeline] { 00:07:40.154 [Pipeline] httpRequest 00:07:40.159 HttpMethod: GET 00:07:40.160 URL: http://10.211.164.101/packages/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz 00:07:40.161 Sending request to url: http://10.211.164.101/packages/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz 00:07:40.162 Response Code: HTTP/1.1 200 OK 00:07:40.163 Success: Status code 200 is in the accepted range: 200,404 00:07:40.163 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz 00:07:40.452 [Pipeline] } 00:07:40.470 [Pipeline] // retry 00:07:40.478 [Pipeline] sh 00:07:40.762 + tar --no-same-owner -xf jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz 00:07:40.778 [Pipeline] httpRequest 00:07:41.173 [Pipeline] echo 00:07:41.175 Sorcerer 10.211.164.101 is alive 00:07:41.185 [Pipeline] retry 00:07:41.187 [Pipeline] { 00:07:41.202 [Pipeline] httpRequest 00:07:41.207 HttpMethod: GET 00:07:41.208 URL: http://10.211.164.101/packages/spdk_6e713f9c6837550dfd82bcef8afb7eb10a46b865.tar.gz 00:07:41.208 Sending request to url: http://10.211.164.101/packages/spdk_6e713f9c6837550dfd82bcef8afb7eb10a46b865.tar.gz 00:07:41.210 Response Code: HTTP/1.1 200 OK 00:07:41.210 Success: Status code 200 is in the accepted range: 200,404 00:07:41.211 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_6e713f9c6837550dfd82bcef8afb7eb10a46b865.tar.gz 00:07:43.486 [Pipeline] } 00:07:43.500 [Pipeline] // retry 00:07:43.508 [Pipeline] sh 00:07:43.795 + tar --no-same-owner -xf spdk_6e713f9c6837550dfd82bcef8afb7eb10a46b865.tar.gz 00:07:47.107 [Pipeline] sh 00:07:47.391 + git -C spdk log --oneline -n5 00:07:47.391 6e713f9c6 lib/rdma_provider: Add API to check if accel seq supported 00:07:47.391 477ec7110 lib/mlx5: Add API to check if UMR registration supported 00:07:47.391 8ee9fa114 accel/mlx5: Merge crypto+copy to reg UMR 00:07:47.391 ce6a621c4 accel/mlx5: Initial implementation of mlx5 platform driver 00:07:47.391 61de1ff17 nvme/nvme: Factor out submit_request function 00:07:47.412 [Pipeline] writeFile 00:07:47.428 [Pipeline] sh 00:07:47.753 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:07:47.766 [Pipeline] sh 00:07:48.052 + cat autorun-spdk.conf 00:07:48.052 SPDK_RUN_FUNCTIONAL_TEST=1 00:07:48.052 SPDK_TEST_NVMF=1 00:07:48.052 SPDK_TEST_NVMF_TRANSPORT=tcp 00:07:48.052 SPDK_TEST_URING=1 00:07:48.052 SPDK_TEST_USDT=1 00:07:48.052 SPDK_RUN_UBSAN=1 00:07:48.052 NET_TYPE=virt 00:07:48.052 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:07:48.061 RUN_NIGHTLY=0 00:07:48.064 [Pipeline] } 00:07:48.080 [Pipeline] // stage 00:07:48.101 [Pipeline] stage 00:07:48.104 [Pipeline] { (Run VM) 00:07:48.118 [Pipeline] sh 00:07:48.405 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:07:48.405 + echo 'Start stage prepare_nvme.sh' 00:07:48.405 Start stage prepare_nvme.sh 00:07:48.405 + [[ -n 4 ]] 00:07:48.405 + disk_prefix=ex4 00:07:48.405 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:07:48.405 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:07:48.405 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:07:48.405 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:48.405 ++ SPDK_TEST_NVMF=1 00:07:48.405 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:07:48.405 ++ SPDK_TEST_URING=1 00:07:48.405 ++ SPDK_TEST_USDT=1 00:07:48.405 ++ SPDK_RUN_UBSAN=1 00:07:48.405 ++ NET_TYPE=virt 00:07:48.405 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:07:48.405 ++ RUN_NIGHTLY=0 00:07:48.405 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:07:48.405 + nvme_files=() 00:07:48.405 + declare -A nvme_files 00:07:48.405 + backend_dir=/var/lib/libvirt/images/backends 00:07:48.405 + nvme_files['nvme.img']=5G 00:07:48.405 + nvme_files['nvme-cmb.img']=5G 00:07:48.405 + nvme_files['nvme-multi0.img']=4G 00:07:48.405 + nvme_files['nvme-multi1.img']=4G 00:07:48.405 + nvme_files['nvme-multi2.img']=4G 00:07:48.405 + nvme_files['nvme-openstack.img']=8G 00:07:48.405 + nvme_files['nvme-zns.img']=5G 00:07:48.405 + (( SPDK_TEST_NVME_PMR == 1 )) 00:07:48.405 + (( SPDK_TEST_FTL == 1 )) 00:07:48.405 + (( SPDK_TEST_NVME_FDP == 1 )) 00:07:48.405 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:07:48.405 + for nvme in "${!nvme_files[@]}" 00:07:48.405 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:07:48.405 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:07:48.405 + for nvme in "${!nvme_files[@]}" 00:07:48.405 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:07:48.405 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:07:48.405 + for nvme in "${!nvme_files[@]}" 00:07:48.405 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:07:48.405 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:07:48.405 + for nvme in "${!nvme_files[@]}" 00:07:48.405 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:07:48.665 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:07:48.665 + for nvme in "${!nvme_files[@]}" 00:07:48.665 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:07:48.665 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:07:48.665 + for nvme in "${!nvme_files[@]}" 00:07:48.665 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:07:48.665 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:07:48.665 + for nvme in "${!nvme_files[@]}" 00:07:48.665 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:07:48.665 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:07:48.665 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:07:48.665 + echo 'End stage prepare_nvme.sh' 00:07:48.665 End stage prepare_nvme.sh 00:07:48.677 [Pipeline] sh 00:07:48.997 + DISTRO=fedora39 00:07:48.997 + CPUS=10 00:07:48.997 + RAM=12288 00:07:48.997 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:07:48.997 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b 
/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39 00:07:48.997 00:07:48.997 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:07:48.997 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:07:48.997 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:07:48.997 HELP=0 00:07:48.997 DRY_RUN=0 00:07:48.997 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:07:48.997 NVME_DISKS_TYPE=nvme,nvme, 00:07:48.997 NVME_AUTO_CREATE=0 00:07:48.997 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:07:48.997 NVME_CMB=,, 00:07:48.997 NVME_PMR=,, 00:07:48.997 NVME_ZNS=,, 00:07:48.997 NVME_MS=,, 00:07:48.997 NVME_FDP=,, 00:07:48.997 SPDK_VAGRANT_DISTRO=fedora39 00:07:48.997 SPDK_VAGRANT_VMCPU=10 00:07:48.997 SPDK_VAGRANT_VMRAM=12288 00:07:48.997 SPDK_VAGRANT_PROVIDER=libvirt 00:07:48.997 SPDK_VAGRANT_HTTP_PROXY= 00:07:48.997 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:07:48.997 SPDK_OPENSTACK_NETWORK=0 00:07:48.997 VAGRANT_PACKAGE_BOX=0 00:07:48.997 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:07:48.997 FORCE_DISTRO=true 00:07:48.997 VAGRANT_BOX_VERSION= 00:07:48.997 EXTRA_VAGRANTFILES= 00:07:48.997 NIC_MODEL=e1000 00:07:48.997 00:07:48.997 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:07:48.997 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:07:51.539 Bringing machine 'default' up with 'libvirt' provider... 00:07:51.798 ==> default: Creating image (snapshot of base box volume). 00:07:51.798 ==> default: Creating domain with the following settings... 
00:07:51.798 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730730840_3e0a474167a930d03254 00:07:51.798 ==> default: -- Domain type: kvm 00:07:51.798 ==> default: -- Cpus: 10 00:07:51.798 ==> default: -- Feature: acpi 00:07:51.798 ==> default: -- Feature: apic 00:07:51.798 ==> default: -- Feature: pae 00:07:51.798 ==> default: -- Memory: 12288M 00:07:51.798 ==> default: -- Memory Backing: hugepages: 00:07:51.798 ==> default: -- Management MAC: 00:07:51.798 ==> default: -- Loader: 00:07:51.798 ==> default: -- Nvram: 00:07:51.798 ==> default: -- Base box: spdk/fedora39 00:07:51.798 ==> default: -- Storage pool: default 00:07:51.798 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730730840_3e0a474167a930d03254.img (20G) 00:07:51.798 ==> default: -- Volume Cache: default 00:07:51.798 ==> default: -- Kernel: 00:07:51.798 ==> default: -- Initrd: 00:07:51.798 ==> default: -- Graphics Type: vnc 00:07:51.798 ==> default: -- Graphics Port: -1 00:07:51.798 ==> default: -- Graphics IP: 127.0.0.1 00:07:51.798 ==> default: -- Graphics Password: Not defined 00:07:51.798 ==> default: -- Video Type: cirrus 00:07:51.798 ==> default: -- Video VRAM: 9216 00:07:51.798 ==> default: -- Sound Type: 00:07:51.798 ==> default: -- Keymap: en-us 00:07:51.798 ==> default: -- TPM Path: 00:07:51.798 ==> default: -- INPUT: type=mouse, bus=ps2 00:07:51.798 ==> default: -- Command line args: 00:07:51.798 ==> default: -> value=-device, 00:07:51.798 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:07:51.798 ==> default: -> value=-drive, 00:07:51.798 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:07:51.798 ==> default: -> value=-device, 00:07:51.798 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:51.798 ==> default: -> value=-device, 00:07:51.798 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:07:51.798 ==> default: -> value=-drive, 00:07:51.798 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:07:51.798 ==> default: -> value=-device, 00:07:51.798 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:51.798 ==> default: -> value=-drive, 00:07:51.798 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:07:51.798 ==> default: -> value=-device, 00:07:51.798 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:51.798 ==> default: -> value=-drive, 00:07:51.798 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:07:51.798 ==> default: -> value=-device, 00:07:51.798 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:52.055 ==> default: Creating shared folders metadata... 00:07:52.055 ==> default: Starting domain. 00:07:52.990 ==> default: Waiting for domain to get an IP address... 00:08:11.084 ==> default: Waiting for SSH to become available... 00:08:11.084 ==> default: Configuring and enabling network interfaces... 
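The -drive/-device pairs in the domain command line above define two emulated NVMe controllers: nvme-0 (serial 12340) with a single namespace backed by ex4-nvme.img, and nvme-1 (serial 12341) with three namespaces backed by the ex4-nvme-multi*.img files; inside the guest these later appear as nvme0n1 and nvme1n1 through nvme1n3. As a sketch only, the same topology could be attached to a bare qemu-system-x86_64 run as below (the machine type and memory are illustrative assumptions; the Fedora 39 boot volume and virtio NIC from the domain are omitted):

  qemu-system-x86_64 -machine q35,accel=kvm -m 4096 -nographic \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2 \
    -device nvme,id=nvme-0,serial=12340,addr=0x10 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096 \
    -device nvme,id=nvme-1,serial=12341,addr=0x11 \
    -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,logical_block_size=4096,physical_block_size=4096 \
    -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,logical_block_size=4096,physical_block_size=4096 \
    -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,logical_block_size=4096,physical_block_size=4096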
00:08:14.385 default: SSH address: 192.168.121.204:22 00:08:14.385 default: SSH username: vagrant 00:08:14.385 default: SSH auth method: private key 00:08:16.296 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:08:26.315 ==> default: Mounting SSHFS shared folder... 00:08:27.259 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:08:27.259 ==> default: Checking Mount.. 00:08:28.210 ==> default: Folder Successfully Mounted! 00:08:28.210 00:08:28.210 SUCCESS! 00:08:28.210 00:08:28.210 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:08:28.210 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:08:28.210 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:08:28.210 00:08:28.219 [Pipeline] } 00:08:28.234 [Pipeline] // stage 00:08:28.244 [Pipeline] dir 00:08:28.245 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:08:28.247 [Pipeline] { 00:08:28.259 [Pipeline] catchError 00:08:28.261 [Pipeline] { 00:08:28.273 [Pipeline] sh 00:08:28.555 + vagrant ssh-config --host vagrant 00:08:28.555 + sed -ne '/^Host/,$p' 00:08:28.555 + tee ssh_conf 00:08:31.098 Host vagrant 00:08:31.098 HostName 192.168.121.204 00:08:31.098 User vagrant 00:08:31.098 Port 22 00:08:31.098 UserKnownHostsFile /dev/null 00:08:31.098 StrictHostKeyChecking no 00:08:31.098 PasswordAuthentication no 00:08:31.098 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:08:31.098 IdentitiesOnly yes 00:08:31.098 LogLevel FATAL 00:08:31.098 ForwardAgent yes 00:08:31.098 ForwardX11 yes 00:08:31.098 00:08:31.114 [Pipeline] withEnv 00:08:31.116 [Pipeline] { 00:08:31.129 [Pipeline] sh 00:08:31.413 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:08:31.413 source /etc/os-release 00:08:31.413 [[ -e /image.version ]] && img=$(< /image.version) 00:08:31.413 # Minimal, systemd-like check. 00:08:31.413 if [[ -e /.dockerenv ]]; then 00:08:31.413 # Clear garbage from the node'\''s name: 00:08:31.413 # agt-er_autotest_547-896 -> autotest_547-896 00:08:31.413 # $HOSTNAME is the actual container id 00:08:31.413 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:08:31.413 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:08:31.413 # We can assume this is a mount from a host where container is running, 00:08:31.413 # so fetch its hostname to easily identify the target swarm worker. 
00:08:31.413 container="$(< /etc/hostname) ($agent)" 00:08:31.413 else 00:08:31.413 # Fallback 00:08:31.413 container=$agent 00:08:31.413 fi 00:08:31.413 fi 00:08:31.413 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:08:31.413 ' 00:08:31.427 [Pipeline] } 00:08:31.443 [Pipeline] // withEnv 00:08:31.454 [Pipeline] setCustomBuildProperty 00:08:31.468 [Pipeline] stage 00:08:31.471 [Pipeline] { (Tests) 00:08:31.488 [Pipeline] sh 00:08:31.774 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:08:32.049 [Pipeline] sh 00:08:32.354 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:08:32.369 [Pipeline] timeout 00:08:32.370 Timeout set to expire in 1 hr 0 min 00:08:32.372 [Pipeline] { 00:08:32.384 [Pipeline] sh 00:08:32.667 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:08:32.927 HEAD is now at 6e713f9c6 lib/rdma_provider: Add API to check if accel seq supported 00:08:33.202 [Pipeline] sh 00:08:33.486 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:08:33.799 [Pipeline] sh 00:08:34.083 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:08:34.362 [Pipeline] sh 00:08:34.643 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo' 00:08:34.643 ++ readlink -f spdk_repo 00:08:34.643 + DIR_ROOT=/home/vagrant/spdk_repo 00:08:34.643 + [[ -n /home/vagrant/spdk_repo ]] 00:08:34.643 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:08:34.643 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:08:34.643 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:08:34.643 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:08:34.643 + [[ -d /home/vagrant/spdk_repo/output ]] 00:08:34.643 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:08:34.643 + cd /home/vagrant/spdk_repo 00:08:34.643 + source /etc/os-release 00:08:34.643 ++ NAME='Fedora Linux' 00:08:34.643 ++ VERSION='39 (Cloud Edition)' 00:08:34.643 ++ ID=fedora 00:08:34.643 ++ VERSION_ID=39 00:08:34.643 ++ VERSION_CODENAME= 00:08:34.643 ++ PLATFORM_ID=platform:f39 00:08:34.644 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:08:34.644 ++ ANSI_COLOR='0;38;2;60;110;180' 00:08:34.644 ++ LOGO=fedora-logo-icon 00:08:34.644 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:08:34.644 ++ HOME_URL=https://fedoraproject.org/ 00:08:34.644 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:08:34.644 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:08:34.644 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:08:34.644 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:08:34.644 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:08:34.644 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:08:34.644 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:08:34.644 ++ SUPPORT_END=2024-11-12 00:08:34.644 ++ VARIANT='Cloud Edition' 00:08:34.644 ++ VARIANT_ID=cloud 00:08:34.644 + uname -a 00:08:34.644 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:08:34.644 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:35.214 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:35.214 Hugepages 00:08:35.214 node hugesize free / total 00:08:35.214 node0 1048576kB 0 / 0 00:08:35.214 node0 2048kB 0 / 0 00:08:35.214 00:08:35.214 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:35.214 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:35.214 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:08:35.214 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:08:35.214 + rm -f /tmp/spdk-ld-path 00:08:35.214 + source autorun-spdk.conf 00:08:35.214 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:08:35.214 ++ SPDK_TEST_NVMF=1 00:08:35.214 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:08:35.214 ++ SPDK_TEST_URING=1 00:08:35.214 ++ SPDK_TEST_USDT=1 00:08:35.214 ++ SPDK_RUN_UBSAN=1 00:08:35.214 ++ NET_TYPE=virt 00:08:35.214 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:08:35.214 ++ RUN_NIGHTLY=0 00:08:35.214 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:08:35.214 + [[ -n '' ]] 00:08:35.214 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:08:35.214 + for M in /var/spdk/build-*-manifest.txt 00:08:35.214 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:08:35.214 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:08:35.214 + for M in /var/spdk/build-*-manifest.txt 00:08:35.214 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:08:35.214 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:08:35.214 + for M in /var/spdk/build-*-manifest.txt 00:08:35.214 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:08:35.214 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:08:35.214 ++ uname 00:08:35.214 + [[ Linux == \L\i\n\u\x ]] 00:08:35.214 + sudo dmesg -T 00:08:35.474 + sudo dmesg --clear 00:08:35.474 + dmesg_pid=5002 00:08:35.474 + [[ Fedora Linux == FreeBSD ]] 00:08:35.474 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:35.474 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:35.474 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:08:35.474 + sudo dmesg -Tw 00:08:35.474 + [[ -x /usr/src/fio-static/fio ]] 00:08:35.474 + export FIO_BIN=/usr/src/fio-static/fio 00:08:35.474 + FIO_BIN=/usr/src/fio-static/fio 00:08:35.474 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:08:35.474 + [[ ! -v VFIO_QEMU_BIN ]] 00:08:35.474 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:08:35.474 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:35.474 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:35.474 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:08:35.474 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:35.474 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:35.474 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:08:38.109 14:34:46 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:08:38.109 14:34:46 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:08:38.109 14:34:46 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:08:38.109 14:34:46 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:08:38.109 14:34:46 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:08:38.109 14:34:46 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:08:38.109 14:34:46 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:08:38.109 14:34:46 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:08:38.109 14:34:46 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:08:38.109 14:34:46 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:08:38.109 14:34:46 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:08:38.109 14:34:46 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:08:38.109 14:34:46 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:08:38.109 14:34:46 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:08:38.109 14:34:46 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:38.109 14:34:46 -- scripts/common.sh@15 -- $ shopt -s extglob 00:08:38.109 14:34:46 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:08:38.109 14:34:46 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.109 14:34:46 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.109 14:34:46 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.109 14:34:46 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.109 14:34:46 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.109 14:34:46 -- paths/export.sh@5 -- $ export PATH 00:08:38.109 14:34:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.109 14:34:46 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:08:38.109 14:34:46 -- common/autobuild_common.sh@486 -- $ date +%s 00:08:38.109 14:34:46 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730730886.XXXXXX 00:08:38.109 14:34:46 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730730886.HejxWP 00:08:38.109 14:34:46 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:08:38.109 14:34:46 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:08:38.109 14:34:46 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:08:38.109 14:34:46 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:08:38.109 14:34:46 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:08:38.109 14:34:46 -- common/autobuild_common.sh@502 -- $ get_config_params 00:08:38.109 14:34:46 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:08:38.109 14:34:46 -- common/autotest_common.sh@10 -- $ set +x 00:08:38.109 14:34:46 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:08:38.109 14:34:46 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:08:38.109 14:34:46 -- pm/common@17 -- $ local monitor 00:08:38.109 14:34:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:38.109 14:34:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:38.109 14:34:46 -- pm/common@25 -- $ sleep 1 00:08:38.109 14:34:46 -- pm/common@21 -- $ date +%s 00:08:38.109 14:34:46 -- pm/common@21 -- $ date +%s 00:08:38.109 14:34:46 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730730886 00:08:38.109 14:34:46 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730730886 00:08:38.109 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730730886_collect-vmstat.pm.log 00:08:38.109 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730730886_collect-cpu-load.pm.log 00:08:39.052 14:34:47 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:08:39.052 14:34:47 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:08:39.052 14:34:47 -- spdk/autobuild.sh@12 -- $ umask 022 00:08:39.052 14:34:47 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:08:39.052 14:34:47 -- spdk/autobuild.sh@16 -- $ date -u 00:08:39.052 Mon Nov 4 02:34:47 PM UTC 2024 00:08:39.052 14:34:47 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:08:39.052 v25.01-pre-169-g6e713f9c6 00:08:39.052 14:34:47 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:08:39.052 14:34:47 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:08:39.052 14:34:47 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:08:39.052 14:34:47 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:08:39.052 14:34:47 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:08:39.052 14:34:47 -- common/autotest_common.sh@10 -- $ set +x 00:08:39.052 ************************************ 00:08:39.052 START TEST ubsan 00:08:39.052 ************************************ 00:08:39.052 using ubsan 00:08:39.052 ************************************ 00:08:39.052 END TEST ubsan 00:08:39.052 ************************************ 00:08:39.052 14:34:47 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:08:39.052 00:08:39.052 real 0m0.000s 00:08:39.052 user 0m0.000s 00:08:39.052 sys 0m0.000s 00:08:39.052 14:34:47 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:08:39.052 14:34:47 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:08:39.052 14:34:48 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:08:39.052 14:34:48 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:08:39.052 14:34:48 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:08:39.052 14:34:48 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:08:39.052 14:34:48 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:08:39.052 14:34:48 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:08:39.052 14:34:48 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:08:39.052 14:34:48 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:08:39.052 14:34:48 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:08:39.052 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:39.052 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:08:39.626 Using 'verbs' RDMA provider 00:08:52.489 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:09:02.507 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:09:02.507 Creating mk/config.mk...done. 00:09:02.507 Creating mk/cc.flags.mk...done. 00:09:02.507 Type 'make' to build. 
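The configure options above line up with the autorun-spdk.conf entries sourced earlier (for example --enable-ubsan alongside SPDK_RUN_UBSAN=1, --with-uring alongside SPDK_TEST_URING=1, --with-usdt alongside SPDK_TEST_USDT=1). A minimal sketch of rebuilding the same tree by hand with the options recorded in this log; the -j value matches the 10 vCPUs given to the VM:

  cd /home/vagrant/spdk_repo/spdk
  # flag set copied from the autobuild configure invocation above
  ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
  make -j10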
00:09:02.507 14:35:10 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:09:02.507 14:35:10 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:09:02.507 14:35:10 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:09:02.507 14:35:10 -- common/autotest_common.sh@10 -- $ set +x 00:09:02.507 ************************************ 00:09:02.507 START TEST make 00:09:02.507 ************************************ 00:09:02.507 14:35:10 make -- common/autotest_common.sh@1127 -- $ make -j10 00:09:02.507 make[1]: Nothing to be done for 'all'. 00:09:12.507 The Meson build system 00:09:12.507 Version: 1.5.0 00:09:12.507 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:09:12.507 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:09:12.507 Build type: native build 00:09:12.508 Program cat found: YES (/usr/bin/cat) 00:09:12.508 Project name: DPDK 00:09:12.508 Project version: 24.03.0 00:09:12.508 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:09:12.508 C linker for the host machine: cc ld.bfd 2.40-14 00:09:12.508 Host machine cpu family: x86_64 00:09:12.508 Host machine cpu: x86_64 00:09:12.508 Message: ## Building in Developer Mode ## 00:09:12.508 Program pkg-config found: YES (/usr/bin/pkg-config) 00:09:12.508 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:09:12.508 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:09:12.508 Program python3 found: YES (/usr/bin/python3) 00:09:12.508 Program cat found: YES (/usr/bin/cat) 00:09:12.508 Compiler for C supports arguments -march=native: YES 00:09:12.508 Checking for size of "void *" : 8 00:09:12.508 Checking for size of "void *" : 8 (cached) 00:09:12.508 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:09:12.508 Library m found: YES 00:09:12.508 Library numa found: YES 00:09:12.508 Has header "numaif.h" : YES 00:09:12.508 Library fdt found: NO 00:09:12.508 Library execinfo found: NO 00:09:12.508 Has header "execinfo.h" : YES 00:09:12.508 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:09:12.508 Run-time dependency libarchive found: NO (tried pkgconfig) 00:09:12.508 Run-time dependency libbsd found: NO (tried pkgconfig) 00:09:12.508 Run-time dependency jansson found: NO (tried pkgconfig) 00:09:12.508 Run-time dependency openssl found: YES 3.1.1 00:09:12.508 Run-time dependency libpcap found: YES 1.10.4 00:09:12.508 Has header "pcap.h" with dependency libpcap: YES 00:09:12.508 Compiler for C supports arguments -Wcast-qual: YES 00:09:12.508 Compiler for C supports arguments -Wdeprecated: YES 00:09:12.508 Compiler for C supports arguments -Wformat: YES 00:09:12.508 Compiler for C supports arguments -Wformat-nonliteral: NO 00:09:12.508 Compiler for C supports arguments -Wformat-security: NO 00:09:12.508 Compiler for C supports arguments -Wmissing-declarations: YES 00:09:12.508 Compiler for C supports arguments -Wmissing-prototypes: YES 00:09:12.508 Compiler for C supports arguments -Wnested-externs: YES 00:09:12.508 Compiler for C supports arguments -Wold-style-definition: YES 00:09:12.508 Compiler for C supports arguments -Wpointer-arith: YES 00:09:12.508 Compiler for C supports arguments -Wsign-compare: YES 00:09:12.508 Compiler for C supports arguments -Wstrict-prototypes: YES 00:09:12.508 Compiler for C supports arguments -Wundef: YES 00:09:12.508 Compiler for C supports arguments -Wwrite-strings: YES 00:09:12.508 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:09:12.508 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:09:12.508 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:09:12.508 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:09:12.508 Program objdump found: YES (/usr/bin/objdump) 00:09:12.508 Compiler for C supports arguments -mavx512f: YES 00:09:12.508 Checking if "AVX512 checking" compiles: YES 00:09:12.508 Fetching value of define "__SSE4_2__" : 1 00:09:12.508 Fetching value of define "__AES__" : 1 00:09:12.508 Fetching value of define "__AVX__" : 1 00:09:12.508 Fetching value of define "__AVX2__" : 1 00:09:12.508 Fetching value of define "__AVX512BW__" : 1 00:09:12.508 Fetching value of define "__AVX512CD__" : 1 00:09:12.508 Fetching value of define "__AVX512DQ__" : 1 00:09:12.508 Fetching value of define "__AVX512F__" : 1 00:09:12.508 Fetching value of define "__AVX512VL__" : 1 00:09:12.508 Fetching value of define "__PCLMUL__" : 1 00:09:12.508 Fetching value of define "__RDRND__" : 1 00:09:12.508 Fetching value of define "__RDSEED__" : 1 00:09:12.508 Fetching value of define "__VPCLMULQDQ__" : 1 00:09:12.508 Fetching value of define "__znver1__" : (undefined) 00:09:12.508 Fetching value of define "__znver2__" : (undefined) 00:09:12.508 Fetching value of define "__znver3__" : (undefined) 00:09:12.508 Fetching value of define "__znver4__" : (undefined) 00:09:12.508 Compiler for C supports arguments -Wno-format-truncation: YES 00:09:12.508 Message: lib/log: Defining dependency "log" 00:09:12.508 Message: lib/kvargs: Defining dependency "kvargs" 00:09:12.508 Message: lib/telemetry: Defining dependency "telemetry" 00:09:12.508 Checking for function "getentropy" : NO 00:09:12.508 Message: lib/eal: Defining dependency "eal" 00:09:12.508 Message: lib/ring: Defining dependency "ring" 00:09:12.508 Message: lib/rcu: Defining dependency "rcu" 00:09:12.508 Message: lib/mempool: Defining dependency "mempool" 00:09:12.508 Message: lib/mbuf: Defining dependency "mbuf" 00:09:12.508 Fetching value of define "__PCLMUL__" : 1 (cached) 00:09:12.508 Fetching value of define "__AVX512F__" : 1 (cached) 00:09:12.508 Fetching value of define "__AVX512BW__" : 1 (cached) 00:09:12.508 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:09:12.508 Fetching value of define "__AVX512VL__" : 1 (cached) 00:09:12.508 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:09:12.508 Compiler for C supports arguments -mpclmul: YES 00:09:12.508 Compiler for C supports arguments -maes: YES 00:09:12.508 Compiler for C supports arguments -mavx512f: YES (cached) 00:09:12.508 Compiler for C supports arguments -mavx512bw: YES 00:09:12.508 Compiler for C supports arguments -mavx512dq: YES 00:09:12.508 Compiler for C supports arguments -mavx512vl: YES 00:09:12.508 Compiler for C supports arguments -mvpclmulqdq: YES 00:09:12.508 Compiler for C supports arguments -mavx2: YES 00:09:12.508 Compiler for C supports arguments -mavx: YES 00:09:12.508 Message: lib/net: Defining dependency "net" 00:09:12.508 Message: lib/meter: Defining dependency "meter" 00:09:12.508 Message: lib/ethdev: Defining dependency "ethdev" 00:09:12.508 Message: lib/pci: Defining dependency "pci" 00:09:12.508 Message: lib/cmdline: Defining dependency "cmdline" 00:09:12.508 Message: lib/hash: Defining dependency "hash" 00:09:12.508 Message: lib/timer: Defining dependency "timer" 00:09:12.508 Message: lib/compressdev: Defining dependency "compressdev" 00:09:12.508 Message: lib/cryptodev: Defining 
dependency "cryptodev" 00:09:12.508 Message: lib/dmadev: Defining dependency "dmadev" 00:09:12.508 Compiler for C supports arguments -Wno-cast-qual: YES 00:09:12.508 Message: lib/power: Defining dependency "power" 00:09:12.508 Message: lib/reorder: Defining dependency "reorder" 00:09:12.508 Message: lib/security: Defining dependency "security" 00:09:12.508 Has header "linux/userfaultfd.h" : YES 00:09:12.508 Has header "linux/vduse.h" : YES 00:09:12.508 Message: lib/vhost: Defining dependency "vhost" 00:09:12.508 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:09:12.508 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:09:12.508 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:09:12.508 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:09:12.508 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:09:12.508 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:09:12.508 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:09:12.508 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:09:12.508 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:09:12.508 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:09:12.508 Program doxygen found: YES (/usr/local/bin/doxygen) 00:09:12.508 Configuring doxy-api-html.conf using configuration 00:09:12.508 Configuring doxy-api-man.conf using configuration 00:09:12.508 Program mandb found: YES (/usr/bin/mandb) 00:09:12.508 Program sphinx-build found: NO 00:09:12.508 Configuring rte_build_config.h using configuration 00:09:12.508 Message: 00:09:12.508 ================= 00:09:12.508 Applications Enabled 00:09:12.508 ================= 00:09:12.508 00:09:12.508 apps: 00:09:12.508 00:09:12.508 00:09:12.508 Message: 00:09:12.508 ================= 00:09:12.508 Libraries Enabled 00:09:12.508 ================= 00:09:12.508 00:09:12.508 libs: 00:09:12.508 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:09:12.508 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:09:12.508 cryptodev, dmadev, power, reorder, security, vhost, 00:09:12.508 00:09:12.508 Message: 00:09:12.508 =============== 00:09:12.508 Drivers Enabled 00:09:12.508 =============== 00:09:12.508 00:09:12.508 common: 00:09:12.508 00:09:12.508 bus: 00:09:12.508 pci, vdev, 00:09:12.508 mempool: 00:09:12.508 ring, 00:09:12.508 dma: 00:09:12.508 00:09:12.508 net: 00:09:12.508 00:09:12.508 crypto: 00:09:12.508 00:09:12.508 compress: 00:09:12.508 00:09:12.508 vdpa: 00:09:12.508 00:09:12.508 00:09:12.508 Message: 00:09:12.508 ================= 00:09:12.508 Content Skipped 00:09:12.508 ================= 00:09:12.508 00:09:12.508 apps: 00:09:12.508 dumpcap: explicitly disabled via build config 00:09:12.508 graph: explicitly disabled via build config 00:09:12.508 pdump: explicitly disabled via build config 00:09:12.508 proc-info: explicitly disabled via build config 00:09:12.508 test-acl: explicitly disabled via build config 00:09:12.508 test-bbdev: explicitly disabled via build config 00:09:12.508 test-cmdline: explicitly disabled via build config 00:09:12.508 test-compress-perf: explicitly disabled via build config 00:09:12.508 test-crypto-perf: explicitly disabled via build config 00:09:12.508 test-dma-perf: explicitly disabled via build config 00:09:12.508 test-eventdev: explicitly disabled via build config 00:09:12.508 test-fib: explicitly disabled via build config 00:09:12.508 
test-flow-perf: explicitly disabled via build config 00:09:12.508 test-gpudev: explicitly disabled via build config 00:09:12.508 test-mldev: explicitly disabled via build config 00:09:12.508 test-pipeline: explicitly disabled via build config 00:09:12.508 test-pmd: explicitly disabled via build config 00:09:12.508 test-regex: explicitly disabled via build config 00:09:12.508 test-sad: explicitly disabled via build config 00:09:12.508 test-security-perf: explicitly disabled via build config 00:09:12.508 00:09:12.508 libs: 00:09:12.509 argparse: explicitly disabled via build config 00:09:12.509 metrics: explicitly disabled via build config 00:09:12.509 acl: explicitly disabled via build config 00:09:12.509 bbdev: explicitly disabled via build config 00:09:12.509 bitratestats: explicitly disabled via build config 00:09:12.509 bpf: explicitly disabled via build config 00:09:12.509 cfgfile: explicitly disabled via build config 00:09:12.509 distributor: explicitly disabled via build config 00:09:12.509 efd: explicitly disabled via build config 00:09:12.509 eventdev: explicitly disabled via build config 00:09:12.509 dispatcher: explicitly disabled via build config 00:09:12.509 gpudev: explicitly disabled via build config 00:09:12.509 gro: explicitly disabled via build config 00:09:12.509 gso: explicitly disabled via build config 00:09:12.509 ip_frag: explicitly disabled via build config 00:09:12.509 jobstats: explicitly disabled via build config 00:09:12.509 latencystats: explicitly disabled via build config 00:09:12.509 lpm: explicitly disabled via build config 00:09:12.509 member: explicitly disabled via build config 00:09:12.509 pcapng: explicitly disabled via build config 00:09:12.509 rawdev: explicitly disabled via build config 00:09:12.509 regexdev: explicitly disabled via build config 00:09:12.509 mldev: explicitly disabled via build config 00:09:12.509 rib: explicitly disabled via build config 00:09:12.509 sched: explicitly disabled via build config 00:09:12.509 stack: explicitly disabled via build config 00:09:12.509 ipsec: explicitly disabled via build config 00:09:12.509 pdcp: explicitly disabled via build config 00:09:12.509 fib: explicitly disabled via build config 00:09:12.509 port: explicitly disabled via build config 00:09:12.509 pdump: explicitly disabled via build config 00:09:12.509 table: explicitly disabled via build config 00:09:12.509 pipeline: explicitly disabled via build config 00:09:12.509 graph: explicitly disabled via build config 00:09:12.509 node: explicitly disabled via build config 00:09:12.509 00:09:12.509 drivers: 00:09:12.509 common/cpt: not in enabled drivers build config 00:09:12.509 common/dpaax: not in enabled drivers build config 00:09:12.509 common/iavf: not in enabled drivers build config 00:09:12.509 common/idpf: not in enabled drivers build config 00:09:12.509 common/ionic: not in enabled drivers build config 00:09:12.509 common/mvep: not in enabled drivers build config 00:09:12.509 common/octeontx: not in enabled drivers build config 00:09:12.509 bus/auxiliary: not in enabled drivers build config 00:09:12.509 bus/cdx: not in enabled drivers build config 00:09:12.509 bus/dpaa: not in enabled drivers build config 00:09:12.509 bus/fslmc: not in enabled drivers build config 00:09:12.509 bus/ifpga: not in enabled drivers build config 00:09:12.509 bus/platform: not in enabled drivers build config 00:09:12.509 bus/uacce: not in enabled drivers build config 00:09:12.509 bus/vmbus: not in enabled drivers build config 00:09:12.509 common/cnxk: not in enabled 
drivers build config 00:09:12.509 common/mlx5: not in enabled drivers build config 00:09:12.509 common/nfp: not in enabled drivers build config 00:09:12.509 common/nitrox: not in enabled drivers build config 00:09:12.509 common/qat: not in enabled drivers build config 00:09:12.509 common/sfc_efx: not in enabled drivers build config 00:09:12.509 mempool/bucket: not in enabled drivers build config 00:09:12.509 mempool/cnxk: not in enabled drivers build config 00:09:12.509 mempool/dpaa: not in enabled drivers build config 00:09:12.509 mempool/dpaa2: not in enabled drivers build config 00:09:12.509 mempool/octeontx: not in enabled drivers build config 00:09:12.509 mempool/stack: not in enabled drivers build config 00:09:12.509 dma/cnxk: not in enabled drivers build config 00:09:12.509 dma/dpaa: not in enabled drivers build config 00:09:12.509 dma/dpaa2: not in enabled drivers build config 00:09:12.509 dma/hisilicon: not in enabled drivers build config 00:09:12.509 dma/idxd: not in enabled drivers build config 00:09:12.509 dma/ioat: not in enabled drivers build config 00:09:12.509 dma/skeleton: not in enabled drivers build config 00:09:12.509 net/af_packet: not in enabled drivers build config 00:09:12.509 net/af_xdp: not in enabled drivers build config 00:09:12.509 net/ark: not in enabled drivers build config 00:09:12.509 net/atlantic: not in enabled drivers build config 00:09:12.509 net/avp: not in enabled drivers build config 00:09:12.509 net/axgbe: not in enabled drivers build config 00:09:12.509 net/bnx2x: not in enabled drivers build config 00:09:12.509 net/bnxt: not in enabled drivers build config 00:09:12.509 net/bonding: not in enabled drivers build config 00:09:12.509 net/cnxk: not in enabled drivers build config 00:09:12.509 net/cpfl: not in enabled drivers build config 00:09:12.509 net/cxgbe: not in enabled drivers build config 00:09:12.509 net/dpaa: not in enabled drivers build config 00:09:12.509 net/dpaa2: not in enabled drivers build config 00:09:12.509 net/e1000: not in enabled drivers build config 00:09:12.509 net/ena: not in enabled drivers build config 00:09:12.509 net/enetc: not in enabled drivers build config 00:09:12.509 net/enetfec: not in enabled drivers build config 00:09:12.509 net/enic: not in enabled drivers build config 00:09:12.509 net/failsafe: not in enabled drivers build config 00:09:12.509 net/fm10k: not in enabled drivers build config 00:09:12.509 net/gve: not in enabled drivers build config 00:09:12.509 net/hinic: not in enabled drivers build config 00:09:12.509 net/hns3: not in enabled drivers build config 00:09:12.509 net/i40e: not in enabled drivers build config 00:09:12.509 net/iavf: not in enabled drivers build config 00:09:12.509 net/ice: not in enabled drivers build config 00:09:12.509 net/idpf: not in enabled drivers build config 00:09:12.509 net/igc: not in enabled drivers build config 00:09:12.509 net/ionic: not in enabled drivers build config 00:09:12.509 net/ipn3ke: not in enabled drivers build config 00:09:12.509 net/ixgbe: not in enabled drivers build config 00:09:12.509 net/mana: not in enabled drivers build config 00:09:12.509 net/memif: not in enabled drivers build config 00:09:12.509 net/mlx4: not in enabled drivers build config 00:09:12.509 net/mlx5: not in enabled drivers build config 00:09:12.509 net/mvneta: not in enabled drivers build config 00:09:12.509 net/mvpp2: not in enabled drivers build config 00:09:12.509 net/netvsc: not in enabled drivers build config 00:09:12.509 net/nfb: not in enabled drivers build config 00:09:12.509 
net/nfp: not in enabled drivers build config 00:09:12.509 net/ngbe: not in enabled drivers build config 00:09:12.509 net/null: not in enabled drivers build config 00:09:12.509 net/octeontx: not in enabled drivers build config 00:09:12.509 net/octeon_ep: not in enabled drivers build config 00:09:12.509 net/pcap: not in enabled drivers build config 00:09:12.509 net/pfe: not in enabled drivers build config 00:09:12.509 net/qede: not in enabled drivers build config 00:09:12.509 net/ring: not in enabled drivers build config 00:09:12.509 net/sfc: not in enabled drivers build config 00:09:12.509 net/softnic: not in enabled drivers build config 00:09:12.509 net/tap: not in enabled drivers build config 00:09:12.509 net/thunderx: not in enabled drivers build config 00:09:12.509 net/txgbe: not in enabled drivers build config 00:09:12.509 net/vdev_netvsc: not in enabled drivers build config 00:09:12.509 net/vhost: not in enabled drivers build config 00:09:12.509 net/virtio: not in enabled drivers build config 00:09:12.509 net/vmxnet3: not in enabled drivers build config 00:09:12.509 raw/*: missing internal dependency, "rawdev" 00:09:12.509 crypto/armv8: not in enabled drivers build config 00:09:12.509 crypto/bcmfs: not in enabled drivers build config 00:09:12.509 crypto/caam_jr: not in enabled drivers build config 00:09:12.509 crypto/ccp: not in enabled drivers build config 00:09:12.509 crypto/cnxk: not in enabled drivers build config 00:09:12.509 crypto/dpaa_sec: not in enabled drivers build config 00:09:12.509 crypto/dpaa2_sec: not in enabled drivers build config 00:09:12.509 crypto/ipsec_mb: not in enabled drivers build config 00:09:12.509 crypto/mlx5: not in enabled drivers build config 00:09:12.509 crypto/mvsam: not in enabled drivers build config 00:09:12.509 crypto/nitrox: not in enabled drivers build config 00:09:12.509 crypto/null: not in enabled drivers build config 00:09:12.509 crypto/octeontx: not in enabled drivers build config 00:09:12.509 crypto/openssl: not in enabled drivers build config 00:09:12.509 crypto/scheduler: not in enabled drivers build config 00:09:12.509 crypto/uadk: not in enabled drivers build config 00:09:12.509 crypto/virtio: not in enabled drivers build config 00:09:12.509 compress/isal: not in enabled drivers build config 00:09:12.509 compress/mlx5: not in enabled drivers build config 00:09:12.509 compress/nitrox: not in enabled drivers build config 00:09:12.509 compress/octeontx: not in enabled drivers build config 00:09:12.509 compress/zlib: not in enabled drivers build config 00:09:12.509 regex/*: missing internal dependency, "regexdev" 00:09:12.509 ml/*: missing internal dependency, "mldev" 00:09:12.509 vdpa/ifc: not in enabled drivers build config 00:09:12.509 vdpa/mlx5: not in enabled drivers build config 00:09:12.509 vdpa/nfp: not in enabled drivers build config 00:09:12.509 vdpa/sfc: not in enabled drivers build config 00:09:12.509 event/*: missing internal dependency, "eventdev" 00:09:12.509 baseband/*: missing internal dependency, "bbdev" 00:09:12.509 gpu/*: missing internal dependency, "gpudev" 00:09:12.509 00:09:12.509 00:09:12.509 Build targets in project: 84 00:09:12.509 00:09:12.509 DPDK 24.03.0 00:09:12.509 00:09:12.509 User defined options 00:09:12.509 buildtype : debug 00:09:12.509 default_library : shared 00:09:12.509 libdir : lib 00:09:12.509 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:09:12.509 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:09:12.509 c_link_args : 00:09:12.509 
cpu_instruction_set: native 00:09:12.509 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:09:12.509 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:09:12.509 enable_docs : false 00:09:12.509 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:09:12.509 enable_kmods : false 00:09:12.509 max_lcores : 128 00:09:12.509 tests : false 00:09:12.510 00:09:12.510 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:09:12.510 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:09:12.510 [1/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:09:12.510 [2/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:09:12.510 [3/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:09:12.510 [4/267] Linking static target lib/librte_kvargs.a 00:09:12.510 [5/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:09:12.510 [6/267] Linking static target lib/librte_log.a 00:09:12.510 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:09:12.510 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:09:12.510 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:09:12.510 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:09:12.510 [11/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:09:12.770 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:09:12.770 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:09:12.770 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:09:12.770 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:09:12.770 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:09:12.770 [17/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:09:12.770 [18/267] Linking static target lib/librte_telemetry.a 00:09:13.030 [19/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:09:13.030 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:09:13.030 [21/267] Linking target lib/librte_log.so.24.1 00:09:13.030 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:09:13.299 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:09:13.299 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:09:13.299 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:09:13.299 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:09:13.299 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:09:13.299 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:09:13.299 [29/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:09:13.299 [30/267] Linking target 
lib/librte_kvargs.so.24.1 00:09:13.299 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:09:13.299 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:09:13.560 [33/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:09:13.561 [34/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:09:13.561 [35/267] Linking target lib/librte_telemetry.so.24.1 00:09:13.561 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:09:13.561 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:09:13.561 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:09:13.821 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:09:13.821 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:09:13.821 [41/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:09:13.821 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:09:13.821 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:09:13.821 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:09:13.821 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:09:13.821 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:09:14.081 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:09:14.081 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:09:14.081 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:09:14.081 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:09:14.081 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:09:14.081 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:09:14.341 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:09:14.341 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:09:14.341 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:09:14.341 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:09:14.341 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:09:14.602 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:09:14.602 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:09:14.602 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:09:14.602 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:09:14.602 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:09:14.602 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:09:14.602 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:09:14.864 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:09:14.864 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:09:14.864 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:09:14.864 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:09:14.864 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:09:14.864 
[70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:09:15.124 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:09:15.124 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:09:15.124 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:09:15.124 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:09:15.124 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:09:15.385 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:09:15.385 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:09:15.385 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:09:15.385 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:09:15.385 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:09:15.385 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:09:15.385 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:09:15.646 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:09:15.646 [84/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:09:15.646 [85/267] Linking static target lib/librte_eal.a 00:09:15.646 [86/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:09:15.646 [87/267] Linking static target lib/librte_ring.a 00:09:15.646 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:09:15.907 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:09:15.907 [90/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:09:15.907 [91/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:09:15.907 [92/267] Linking static target lib/librte_rcu.a 00:09:15.907 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:09:16.169 [94/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:09:16.169 [95/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:09:16.169 [96/267] Linking static target lib/librte_mempool.a 00:09:16.169 [97/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:09:16.430 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:09:16.430 [99/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:09:16.430 [100/267] Linking static target lib/librte_mbuf.a 00:09:16.430 [101/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:09:16.430 [102/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:09:16.430 [103/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:09:16.430 [104/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:09:16.430 [105/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:09:16.710 [106/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:09:16.710 [107/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:09:16.710 [108/267] Linking static target lib/librte_net.a 00:09:16.710 [109/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:09:16.710 [110/267] Linking static target lib/librte_meter.a 00:09:16.710 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:09:16.970 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 
00:09:16.970 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:09:16.970 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:09:17.231 [115/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:09:17.231 [116/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:09:17.231 [117/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:09:17.231 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:09:17.231 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:09:17.231 [120/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:09:17.493 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:09:17.755 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:09:17.755 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:09:17.755 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:09:17.755 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:09:17.755 [126/267] Linking static target lib/librte_pci.a 00:09:17.755 [127/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:09:17.755 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:09:18.017 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:09:18.017 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:09:18.017 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:09:18.017 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:09:18.017 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:09:18.017 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:09:18.017 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:09:18.017 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:09:18.017 [137/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:09:18.017 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:09:18.017 [139/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:09:18.017 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:09:18.017 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:09:18.279 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:09:18.279 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:09:18.279 [144/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:09:18.279 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:09:18.279 [146/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:09:18.279 [147/267] Linking static target lib/librte_ethdev.a 00:09:18.279 [148/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:09:18.538 [149/267] Linking static target lib/librte_cmdline.a 00:09:18.538 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:09:18.538 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:09:18.538 [152/267] Compiling C 
object lib/librte_timer.a.p/timer_rte_timer.c.o 00:09:18.538 [153/267] Linking static target lib/librte_timer.a 00:09:18.538 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:09:18.538 [155/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:09:18.796 [156/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:09:18.796 [157/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:09:18.796 [158/267] Linking static target lib/librte_hash.a 00:09:18.797 [159/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:09:18.797 [160/267] Linking static target lib/librte_compressdev.a 00:09:19.055 [161/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:09:19.055 [162/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:09:19.055 [163/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:09:19.055 [164/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:09:19.055 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:09:19.313 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:09:19.313 [167/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:09:19.313 [168/267] Linking static target lib/librte_cryptodev.a 00:09:19.313 [169/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:09:19.313 [170/267] Linking static target lib/librte_dmadev.a 00:09:19.313 [171/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:09:19.313 [172/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:09:19.313 [173/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:09:19.577 [174/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:09:19.577 [175/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:19.577 [176/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:09:19.577 [177/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:09:19.578 [178/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:09:19.874 [179/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:09:19.874 [180/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:09:19.874 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:09:19.874 [182/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:09:19.874 [183/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:19.874 [184/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:09:20.132 [185/267] Linking static target lib/librte_reorder.a 00:09:20.132 [186/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:09:20.132 [187/267] Linking static target lib/librte_power.a 00:09:20.132 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:09:20.132 [189/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:09:20.132 [190/267] Linking static target lib/librte_security.a 00:09:20.390 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:09:20.390 [192/267] Compiling C object 
lib/librte_vhost.a.p/vhost_vdpa.c.o 00:09:20.390 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:09:20.648 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:09:20.907 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:09:20.907 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:09:20.907 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:09:20.907 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:09:21.164 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:09:21.164 [200/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:09:21.164 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:09:21.164 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:09:21.164 [203/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:09:21.164 [204/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:21.164 [205/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:09:21.422 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:09:21.422 [207/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:09:21.422 [208/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:09:21.422 [209/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:09:21.422 [210/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:09:21.422 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:09:21.422 [212/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:09:21.422 [213/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:09:21.422 [214/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:09:21.422 [215/267] Linking static target drivers/librte_bus_vdev.a 00:09:21.680 [216/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:09:21.680 [217/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:09:21.680 [218/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:09:21.680 [219/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:09:21.680 [220/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:09:21.680 [221/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:21.680 [222/267] Linking static target drivers/librte_bus_pci.a 00:09:21.680 [223/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:09:21.937 [224/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:09:21.937 [225/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:09:21.937 [226/267] Linking static target drivers/librte_mempool_ring.a 00:09:22.194 [227/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:09:22.453 [228/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:09:22.710 [229/267] Linking static target 
lib/librte_vhost.a 00:09:23.275 [230/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:09:23.275 [231/267] Linking target lib/librte_eal.so.24.1 00:09:23.532 [232/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:09:23.532 [233/267] Linking target lib/librte_timer.so.24.1 00:09:23.532 [234/267] Linking target lib/librte_dmadev.so.24.1 00:09:23.532 [235/267] Linking target lib/librte_pci.so.24.1 00:09:23.532 [236/267] Linking target lib/librte_meter.so.24.1 00:09:23.532 [237/267] Linking target drivers/librte_bus_vdev.so.24.1 00:09:23.532 [238/267] Linking target lib/librte_ring.so.24.1 00:09:23.532 [239/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:09:23.532 [240/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:09:23.532 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:09:23.532 [242/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:09:23.532 [243/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:09:23.532 [244/267] Linking target drivers/librte_bus_pci.so.24.1 00:09:23.791 [245/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:09:23.791 [246/267] Linking target lib/librte_rcu.so.24.1 00:09:23.791 [247/267] Linking target lib/librte_mempool.so.24.1 00:09:23.791 [248/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:09:23.791 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:09:23.791 [250/267] Linking target drivers/librte_mempool_ring.so.24.1 00:09:23.791 [251/267] Linking target lib/librte_mbuf.so.24.1 00:09:23.791 [252/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:24.048 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:09:24.048 [254/267] Linking target lib/librte_cryptodev.so.24.1 00:09:24.048 [255/267] Linking target lib/librte_reorder.so.24.1 00:09:24.048 [256/267] Linking target lib/librte_net.so.24.1 00:09:24.048 [257/267] Linking target lib/librte_compressdev.so.24.1 00:09:24.048 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:09:24.048 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:09:24.048 [260/267] Linking target lib/librte_cmdline.so.24.1 00:09:24.048 [261/267] Linking target lib/librte_hash.so.24.1 00:09:24.048 [262/267] Linking target lib/librte_security.so.24.1 00:09:24.048 [263/267] Linking target lib/librte_ethdev.so.24.1 00:09:24.305 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:09:24.305 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:09:24.305 [266/267] Linking target lib/librte_power.so.24.1 00:09:24.305 [267/267] Linking target lib/librte_vhost.so.24.1 00:09:24.305 INFO: autodetecting backend as ninja 00:09:24.305 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:09:42.434 CC lib/ut/ut.o 00:09:42.434 CC lib/ut_mock/mock.o 00:09:42.434 CC lib/log/log.o 00:09:42.434 CC lib/log/log_deprecated.o 00:09:42.434 CC lib/log/log_flags.o 00:09:42.434 LIB libspdk_ut.a 00:09:42.434 LIB libspdk_ut_mock.a 00:09:42.434 SO libspdk_ut.so.2.0 00:09:42.434 
LIB libspdk_log.a 00:09:42.434 SO libspdk_ut_mock.so.6.0 00:09:42.434 SO libspdk_log.so.7.1 00:09:42.434 SYMLINK libspdk_ut.so 00:09:42.434 SYMLINK libspdk_ut_mock.so 00:09:42.434 SYMLINK libspdk_log.so 00:09:42.434 CXX lib/trace_parser/trace.o 00:09:42.434 CC lib/dma/dma.o 00:09:42.434 CC lib/ioat/ioat.o 00:09:42.434 CC lib/util/base64.o 00:09:42.434 CC lib/util/bit_array.o 00:09:42.434 CC lib/util/crc32.o 00:09:42.434 CC lib/util/cpuset.o 00:09:42.434 CC lib/util/crc32c.o 00:09:42.434 CC lib/util/crc16.o 00:09:42.434 CC lib/vfio_user/host/vfio_user_pci.o 00:09:42.434 CC lib/util/crc32_ieee.o 00:09:42.434 CC lib/util/crc64.o 00:09:42.434 CC lib/util/dif.o 00:09:42.434 CC lib/util/fd.o 00:09:42.434 LIB libspdk_dma.a 00:09:42.434 CC lib/util/fd_group.o 00:09:42.434 SO libspdk_dma.so.5.0 00:09:42.434 CC lib/util/file.o 00:09:42.434 SYMLINK libspdk_dma.so 00:09:42.434 CC lib/util/hexlify.o 00:09:42.434 CC lib/util/iov.o 00:09:42.434 LIB libspdk_ioat.a 00:09:42.434 CC lib/vfio_user/host/vfio_user.o 00:09:42.434 CC lib/util/math.o 00:09:42.434 CC lib/util/net.o 00:09:42.434 SO libspdk_ioat.so.7.0 00:09:42.434 SYMLINK libspdk_ioat.so 00:09:42.434 CC lib/util/pipe.o 00:09:42.434 CC lib/util/strerror_tls.o 00:09:42.434 CC lib/util/string.o 00:09:42.434 CC lib/util/uuid.o 00:09:42.434 CC lib/util/xor.o 00:09:42.434 CC lib/util/zipf.o 00:09:42.434 CC lib/util/md5.o 00:09:42.434 LIB libspdk_vfio_user.a 00:09:42.434 SO libspdk_vfio_user.so.5.0 00:09:42.434 SYMLINK libspdk_vfio_user.so 00:09:42.434 LIB libspdk_util.a 00:09:42.434 SO libspdk_util.so.10.1 00:09:42.434 SYMLINK libspdk_util.so 00:09:42.434 LIB libspdk_trace_parser.a 00:09:42.434 SO libspdk_trace_parser.so.6.0 00:09:42.434 CC lib/json/json_parse.o 00:09:42.434 CC lib/json/json_util.o 00:09:42.434 CC lib/json/json_write.o 00:09:42.434 CC lib/idxd/idxd.o 00:09:42.434 CC lib/env_dpdk/env.o 00:09:42.434 CC lib/idxd/idxd_user.o 00:09:42.434 CC lib/vmd/vmd.o 00:09:42.434 CC lib/rdma_utils/rdma_utils.o 00:09:42.434 CC lib/conf/conf.o 00:09:42.434 SYMLINK libspdk_trace_parser.so 00:09:42.434 CC lib/vmd/led.o 00:09:42.434 CC lib/env_dpdk/memory.o 00:09:42.434 LIB libspdk_conf.a 00:09:42.434 SO libspdk_conf.so.6.0 00:09:42.434 CC lib/env_dpdk/pci.o 00:09:42.434 CC lib/idxd/idxd_kernel.o 00:09:42.434 CC lib/env_dpdk/init.o 00:09:42.434 SYMLINK libspdk_conf.so 00:09:42.434 CC lib/env_dpdk/threads.o 00:09:42.434 LIB libspdk_json.a 00:09:42.434 LIB libspdk_rdma_utils.a 00:09:42.434 SO libspdk_rdma_utils.so.1.0 00:09:42.434 SO libspdk_json.so.6.0 00:09:42.434 CC lib/env_dpdk/pci_ioat.o 00:09:42.434 SYMLINK libspdk_rdma_utils.so 00:09:42.434 SYMLINK libspdk_json.so 00:09:42.434 CC lib/env_dpdk/pci_virtio.o 00:09:42.434 CC lib/env_dpdk/pci_vmd.o 00:09:42.434 CC lib/env_dpdk/pci_idxd.o 00:09:42.434 CC lib/env_dpdk/pci_event.o 00:09:42.434 CC lib/rdma_provider/common.o 00:09:42.434 LIB libspdk_idxd.a 00:09:42.434 CC lib/env_dpdk/sigbus_handler.o 00:09:42.434 CC lib/env_dpdk/pci_dpdk.o 00:09:42.434 SO libspdk_idxd.so.12.1 00:09:42.434 LIB libspdk_vmd.a 00:09:42.434 CC lib/jsonrpc/jsonrpc_server.o 00:09:42.434 SO libspdk_vmd.so.6.0 00:09:42.434 CC lib/rdma_provider/rdma_provider_verbs.o 00:09:42.434 SYMLINK libspdk_idxd.so 00:09:42.434 CC lib/env_dpdk/pci_dpdk_2207.o 00:09:42.434 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:09:42.434 SYMLINK libspdk_vmd.so 00:09:42.434 CC lib/jsonrpc/jsonrpc_client.o 00:09:42.434 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:09:42.434 CC lib/env_dpdk/pci_dpdk_2211.o 00:09:42.434 LIB libspdk_rdma_provider.a 00:09:42.434 SO 
libspdk_rdma_provider.so.7.0 00:09:42.434 LIB libspdk_jsonrpc.a 00:09:42.434 SYMLINK libspdk_rdma_provider.so 00:09:42.434 SO libspdk_jsonrpc.so.6.0 00:09:42.434 SYMLINK libspdk_jsonrpc.so 00:09:42.434 LIB libspdk_env_dpdk.a 00:09:42.434 SO libspdk_env_dpdk.so.15.1 00:09:42.434 SYMLINK libspdk_env_dpdk.so 00:09:42.434 CC lib/rpc/rpc.o 00:09:42.692 LIB libspdk_rpc.a 00:09:42.692 SO libspdk_rpc.so.6.0 00:09:42.692 SYMLINK libspdk_rpc.so 00:09:42.950 CC lib/notify/notify_rpc.o 00:09:42.950 CC lib/notify/notify.o 00:09:42.950 CC lib/keyring/keyring_rpc.o 00:09:42.950 CC lib/keyring/keyring.o 00:09:42.950 CC lib/trace/trace.o 00:09:42.950 CC lib/trace/trace_flags.o 00:09:42.950 CC lib/trace/trace_rpc.o 00:09:42.951 LIB libspdk_notify.a 00:09:42.951 SO libspdk_notify.so.6.0 00:09:42.951 LIB libspdk_keyring.a 00:09:42.951 SYMLINK libspdk_notify.so 00:09:42.951 SO libspdk_keyring.so.2.0 00:09:42.951 LIB libspdk_trace.a 00:09:43.211 SYMLINK libspdk_keyring.so 00:09:43.211 SO libspdk_trace.so.11.0 00:09:43.211 SYMLINK libspdk_trace.so 00:09:43.211 CC lib/thread/thread.o 00:09:43.211 CC lib/thread/iobuf.o 00:09:43.211 CC lib/sock/sock_rpc.o 00:09:43.211 CC lib/sock/sock.o 00:09:43.782 LIB libspdk_sock.a 00:09:43.782 SO libspdk_sock.so.10.0 00:09:43.782 SYMLINK libspdk_sock.so 00:09:44.055 CC lib/nvme/nvme_ctrlr.o 00:09:44.055 CC lib/nvme/nvme_fabric.o 00:09:44.055 CC lib/nvme/nvme_ctrlr_cmd.o 00:09:44.055 CC lib/nvme/nvme_ns_cmd.o 00:09:44.055 CC lib/nvme/nvme_qpair.o 00:09:44.055 CC lib/nvme/nvme_pcie.o 00:09:44.055 CC lib/nvme/nvme.o 00:09:44.055 CC lib/nvme/nvme_ns.o 00:09:44.055 CC lib/nvme/nvme_pcie_common.o 00:09:44.621 CC lib/nvme/nvme_quirks.o 00:09:44.621 CC lib/nvme/nvme_transport.o 00:09:44.621 LIB libspdk_thread.a 00:09:44.621 CC lib/nvme/nvme_discovery.o 00:09:44.621 SO libspdk_thread.so.11.0 00:09:44.621 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:09:44.621 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:09:44.621 SYMLINK libspdk_thread.so 00:09:44.621 CC lib/nvme/nvme_tcp.o 00:09:44.621 CC lib/nvme/nvme_opal.o 00:09:44.621 CC lib/nvme/nvme_io_msg.o 00:09:44.878 CC lib/nvme/nvme_poll_group.o 00:09:44.878 CC lib/nvme/nvme_zns.o 00:09:45.136 CC lib/nvme/nvme_stubs.o 00:09:45.136 CC lib/nvme/nvme_auth.o 00:09:45.136 CC lib/nvme/nvme_cuse.o 00:09:45.136 CC lib/nvme/nvme_rdma.o 00:09:45.136 CC lib/accel/accel.o 00:09:45.136 CC lib/accel/accel_rpc.o 00:09:45.136 CC lib/accel/accel_sw.o 00:09:45.394 CC lib/init/json_config.o 00:09:45.394 CC lib/blob/blobstore.o 00:09:45.664 CC lib/virtio/virtio.o 00:09:45.664 CC lib/virtio/virtio_vhost_user.o 00:09:45.664 CC lib/fsdev/fsdev.o 00:09:45.664 CC lib/init/subsystem.o 00:09:45.925 CC lib/virtio/virtio_vfio_user.o 00:09:45.925 CC lib/fsdev/fsdev_io.o 00:09:45.925 CC lib/blob/request.o 00:09:45.925 CC lib/init/subsystem_rpc.o 00:09:45.925 CC lib/blob/zeroes.o 00:09:45.925 CC lib/virtio/virtio_pci.o 00:09:45.925 CC lib/init/rpc.o 00:09:45.925 CC lib/fsdev/fsdev_rpc.o 00:09:45.925 CC lib/blob/blob_bs_dev.o 00:09:45.925 LIB libspdk_accel.a 00:09:46.183 SO libspdk_accel.so.16.0 00:09:46.183 LIB libspdk_virtio.a 00:09:46.183 LIB libspdk_fsdev.a 00:09:46.183 LIB libspdk_init.a 00:09:46.183 SYMLINK libspdk_accel.so 00:09:46.183 SO libspdk_virtio.so.7.0 00:09:46.183 SO libspdk_fsdev.so.2.0 00:09:46.183 SO libspdk_init.so.6.0 00:09:46.183 LIB libspdk_nvme.a 00:09:46.183 SYMLINK libspdk_virtio.so 00:09:46.183 SYMLINK libspdk_fsdev.so 00:09:46.183 SYMLINK libspdk_init.so 00:09:46.440 CC lib/bdev/bdev.o 00:09:46.440 CC lib/bdev/bdev_rpc.o 00:09:46.440 CC lib/bdev/part.o 
00:09:46.440 CC lib/bdev/scsi_nvme.o 00:09:46.440 CC lib/bdev/bdev_zone.o 00:09:46.440 SO libspdk_nvme.so.15.0 00:09:46.440 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:09:46.440 CC lib/event/app.o 00:09:46.440 CC lib/event/reactor.o 00:09:46.440 CC lib/event/log_rpc.o 00:09:46.440 SYMLINK libspdk_nvme.so 00:09:46.440 CC lib/event/app_rpc.o 00:09:46.440 CC lib/event/scheduler_static.o 00:09:46.697 LIB libspdk_event.a 00:09:46.697 SO libspdk_event.so.14.0 00:09:46.955 SYMLINK libspdk_event.so 00:09:46.955 LIB libspdk_fuse_dispatcher.a 00:09:46.955 SO libspdk_fuse_dispatcher.so.1.0 00:09:46.955 SYMLINK libspdk_fuse_dispatcher.so 00:09:47.909 LIB libspdk_blob.a 00:09:47.909 SO libspdk_blob.so.11.0 00:09:47.909 SYMLINK libspdk_blob.so 00:09:48.167 CC lib/lvol/lvol.o 00:09:48.167 CC lib/blobfs/blobfs.o 00:09:48.167 CC lib/blobfs/tree.o 00:09:48.425 LIB libspdk_bdev.a 00:09:48.425 SO libspdk_bdev.so.17.0 00:09:48.715 SYMLINK libspdk_bdev.so 00:09:48.715 LIB libspdk_lvol.a 00:09:48.715 SO libspdk_lvol.so.10.0 00:09:48.715 CC lib/ftl/ftl_core.o 00:09:48.715 CC lib/ftl/ftl_init.o 00:09:48.715 CC lib/ftl/ftl_layout.o 00:09:48.715 CC lib/scsi/dev.o 00:09:48.715 CC lib/nvmf/ctrlr.o 00:09:48.715 CC lib/nbd/nbd.o 00:09:48.715 CC lib/nvmf/ctrlr_discovery.o 00:09:48.715 CC lib/ublk/ublk.o 00:09:48.715 SYMLINK libspdk_lvol.so 00:09:48.715 CC lib/ublk/ublk_rpc.o 00:09:48.715 LIB libspdk_blobfs.a 00:09:48.984 SO libspdk_blobfs.so.10.0 00:09:48.984 SYMLINK libspdk_blobfs.so 00:09:48.984 CC lib/nvmf/ctrlr_bdev.o 00:09:48.984 CC lib/scsi/lun.o 00:09:48.984 CC lib/scsi/port.o 00:09:48.984 CC lib/scsi/scsi.o 00:09:48.984 CC lib/scsi/scsi_bdev.o 00:09:48.984 CC lib/ftl/ftl_debug.o 00:09:48.984 CC lib/nbd/nbd_rpc.o 00:09:48.984 CC lib/ftl/ftl_io.o 00:09:48.984 CC lib/ftl/ftl_sb.o 00:09:49.242 CC lib/ftl/ftl_l2p.o 00:09:49.242 CC lib/ftl/ftl_l2p_flat.o 00:09:49.242 LIB libspdk_nbd.a 00:09:49.242 SO libspdk_nbd.so.7.0 00:09:49.242 CC lib/scsi/scsi_pr.o 00:09:49.242 CC lib/scsi/scsi_rpc.o 00:09:49.242 LIB libspdk_ublk.a 00:09:49.242 CC lib/nvmf/subsystem.o 00:09:49.242 SYMLINK libspdk_nbd.so 00:09:49.242 CC lib/scsi/task.o 00:09:49.242 SO libspdk_ublk.so.3.0 00:09:49.242 CC lib/ftl/ftl_nv_cache.o 00:09:49.242 CC lib/nvmf/nvmf.o 00:09:49.500 SYMLINK libspdk_ublk.so 00:09:49.500 CC lib/ftl/ftl_band.o 00:09:49.500 CC lib/ftl/ftl_band_ops.o 00:09:49.500 CC lib/nvmf/nvmf_rpc.o 00:09:49.500 CC lib/ftl/ftl_writer.o 00:09:49.500 CC lib/ftl/ftl_rq.o 00:09:49.500 LIB libspdk_scsi.a 00:09:49.500 SO libspdk_scsi.so.9.0 00:09:49.758 CC lib/nvmf/transport.o 00:09:49.758 SYMLINK libspdk_scsi.so 00:09:49.758 CC lib/ftl/ftl_reloc.o 00:09:49.758 CC lib/ftl/ftl_l2p_cache.o 00:09:49.758 CC lib/ftl/ftl_p2l.o 00:09:49.758 CC lib/ftl/ftl_p2l_log.o 00:09:50.016 CC lib/ftl/mngt/ftl_mngt.o 00:09:50.016 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:09:50.016 CC lib/iscsi/conn.o 00:09:50.016 CC lib/nvmf/tcp.o 00:09:50.016 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:09:50.016 CC lib/ftl/mngt/ftl_mngt_startup.o 00:09:50.016 CC lib/ftl/mngt/ftl_mngt_md.o 00:09:50.016 CC lib/ftl/mngt/ftl_mngt_misc.o 00:09:50.016 CC lib/iscsi/init_grp.o 00:09:50.273 CC lib/iscsi/iscsi.o 00:09:50.273 CC lib/iscsi/param.o 00:09:50.273 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:09:50.273 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:09:50.273 CC lib/iscsi/portal_grp.o 00:09:50.273 CC lib/nvmf/stubs.o 00:09:50.273 CC lib/iscsi/tgt_node.o 00:09:50.273 CC lib/iscsi/iscsi_subsystem.o 00:09:50.273 CC lib/nvmf/mdns_server.o 00:09:50.273 CC lib/ftl/mngt/ftl_mngt_band.o 00:09:50.552 CC lib/iscsi/iscsi_rpc.o 
00:09:50.552 CC lib/iscsi/task.o 00:09:50.552 CC lib/nvmf/rdma.o 00:09:50.552 CC lib/vhost/vhost.o 00:09:50.552 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:09:50.823 CC lib/vhost/vhost_rpc.o 00:09:50.823 CC lib/vhost/vhost_scsi.o 00:09:50.823 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:09:50.823 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:09:50.823 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:09:50.823 CC lib/vhost/vhost_blk.o 00:09:51.080 CC lib/ftl/utils/ftl_conf.o 00:09:51.080 CC lib/vhost/rte_vhost_user.o 00:09:51.080 CC lib/ftl/utils/ftl_md.o 00:09:51.080 CC lib/nvmf/auth.o 00:09:51.339 LIB libspdk_iscsi.a 00:09:51.339 CC lib/ftl/utils/ftl_mempool.o 00:09:51.339 SO libspdk_iscsi.so.8.0 00:09:51.339 CC lib/ftl/utils/ftl_bitmap.o 00:09:51.339 SYMLINK libspdk_iscsi.so 00:09:51.339 CC lib/ftl/utils/ftl_property.o 00:09:51.339 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:09:51.339 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:09:51.597 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:09:51.597 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:09:51.597 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:09:51.597 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:09:51.597 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:09:51.597 CC lib/ftl/upgrade/ftl_sb_v3.o 00:09:51.597 CC lib/ftl/upgrade/ftl_sb_v5.o 00:09:51.597 CC lib/ftl/nvc/ftl_nvc_dev.o 00:09:51.597 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:09:51.597 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:09:51.597 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:09:51.855 CC lib/ftl/base/ftl_base_dev.o 00:09:51.855 CC lib/ftl/base/ftl_base_bdev.o 00:09:51.855 CC lib/ftl/ftl_trace.o 00:09:51.855 LIB libspdk_vhost.a 00:09:51.855 SO libspdk_vhost.so.8.0 00:09:52.114 SYMLINK libspdk_vhost.so 00:09:52.114 LIB libspdk_ftl.a 00:09:52.114 SO libspdk_ftl.so.9.0 00:09:52.374 LIB libspdk_nvmf.a 00:09:52.374 SO libspdk_nvmf.so.20.0 00:09:52.374 SYMLINK libspdk_ftl.so 00:09:52.643 SYMLINK libspdk_nvmf.so 00:09:52.901 CC module/env_dpdk/env_dpdk_rpc.o 00:09:52.901 CC module/sock/uring/uring.o 00:09:52.901 CC module/accel/ioat/accel_ioat.o 00:09:52.901 CC module/sock/posix/posix.o 00:09:52.901 CC module/scheduler/dynamic/scheduler_dynamic.o 00:09:52.901 CC module/accel/error/accel_error.o 00:09:52.901 CC module/accel/dsa/accel_dsa.o 00:09:52.901 CC module/fsdev/aio/fsdev_aio.o 00:09:52.901 CC module/blob/bdev/blob_bdev.o 00:09:52.901 CC module/keyring/file/keyring.o 00:09:52.901 LIB libspdk_env_dpdk_rpc.a 00:09:52.901 SO libspdk_env_dpdk_rpc.so.6.0 00:09:52.901 SYMLINK libspdk_env_dpdk_rpc.so 00:09:52.901 CC module/accel/ioat/accel_ioat_rpc.o 00:09:52.901 CC module/keyring/file/keyring_rpc.o 00:09:53.159 CC module/fsdev/aio/fsdev_aio_rpc.o 00:09:53.159 LIB libspdk_scheduler_dynamic.a 00:09:53.159 CC module/accel/error/accel_error_rpc.o 00:09:53.159 SO libspdk_scheduler_dynamic.so.4.0 00:09:53.159 LIB libspdk_accel_ioat.a 00:09:53.159 LIB libspdk_blob_bdev.a 00:09:53.159 CC module/accel/dsa/accel_dsa_rpc.o 00:09:53.159 SO libspdk_accel_ioat.so.6.0 00:09:53.159 SYMLINK libspdk_scheduler_dynamic.so 00:09:53.159 SO libspdk_blob_bdev.so.11.0 00:09:53.159 LIB libspdk_keyring_file.a 00:09:53.159 SYMLINK libspdk_accel_ioat.so 00:09:53.159 LIB libspdk_accel_error.a 00:09:53.159 SO libspdk_keyring_file.so.2.0 00:09:53.159 SYMLINK libspdk_blob_bdev.so 00:09:53.159 SO libspdk_accel_error.so.2.0 00:09:53.159 SYMLINK libspdk_keyring_file.so 00:09:53.159 SYMLINK libspdk_accel_error.so 00:09:53.159 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:09:53.416 CC module/fsdev/aio/linux_aio_mgr.o 00:09:53.416 LIB libspdk_accel_dsa.a 00:09:53.416 SO 
libspdk_accel_dsa.so.5.0 00:09:53.416 CC module/scheduler/gscheduler/gscheduler.o 00:09:53.416 CC module/accel/iaa/accel_iaa.o 00:09:53.416 SYMLINK libspdk_accel_dsa.so 00:09:53.416 CC module/keyring/linux/keyring.o 00:09:53.416 LIB libspdk_scheduler_dpdk_governor.a 00:09:53.416 LIB libspdk_sock_uring.a 00:09:53.416 SO libspdk_scheduler_dpdk_governor.so.4.0 00:09:53.416 SO libspdk_sock_uring.so.5.0 00:09:53.416 LIB libspdk_fsdev_aio.a 00:09:53.416 LIB libspdk_scheduler_gscheduler.a 00:09:53.416 LIB libspdk_sock_posix.a 00:09:53.416 SO libspdk_fsdev_aio.so.1.0 00:09:53.416 SO libspdk_scheduler_gscheduler.so.4.0 00:09:53.416 SYMLINK libspdk_scheduler_dpdk_governor.so 00:09:53.416 CC module/keyring/linux/keyring_rpc.o 00:09:53.416 CC module/bdev/delay/vbdev_delay.o 00:09:53.416 SO libspdk_sock_posix.so.6.0 00:09:53.416 SYMLINK libspdk_sock_uring.so 00:09:53.416 CC module/accel/iaa/accel_iaa_rpc.o 00:09:53.416 CC module/bdev/delay/vbdev_delay_rpc.o 00:09:53.674 SYMLINK libspdk_scheduler_gscheduler.so 00:09:53.674 CC module/bdev/error/vbdev_error.o 00:09:53.674 CC module/bdev/gpt/gpt.o 00:09:53.674 SYMLINK libspdk_fsdev_aio.so 00:09:53.674 SYMLINK libspdk_sock_posix.so 00:09:53.674 CC module/bdev/gpt/vbdev_gpt.o 00:09:53.674 LIB libspdk_keyring_linux.a 00:09:53.674 SO libspdk_keyring_linux.so.1.0 00:09:53.674 LIB libspdk_accel_iaa.a 00:09:53.674 SO libspdk_accel_iaa.so.3.0 00:09:53.674 CC module/bdev/lvol/vbdev_lvol.o 00:09:53.674 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:09:53.674 CC module/bdev/malloc/bdev_malloc.o 00:09:53.674 SYMLINK libspdk_keyring_linux.so 00:09:53.674 CC module/bdev/error/vbdev_error_rpc.o 00:09:53.674 SYMLINK libspdk_accel_iaa.so 00:09:53.674 CC module/bdev/malloc/bdev_malloc_rpc.o 00:09:53.674 CC module/blobfs/bdev/blobfs_bdev.o 00:09:53.931 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:09:53.931 LIB libspdk_bdev_delay.a 00:09:53.931 LIB libspdk_bdev_gpt.a 00:09:53.931 SO libspdk_bdev_delay.so.6.0 00:09:53.931 CC module/bdev/null/bdev_null.o 00:09:53.931 LIB libspdk_bdev_error.a 00:09:53.931 SO libspdk_bdev_gpt.so.6.0 00:09:53.931 SO libspdk_bdev_error.so.6.0 00:09:53.931 SYMLINK libspdk_bdev_delay.so 00:09:53.931 SYMLINK libspdk_bdev_gpt.so 00:09:53.931 SYMLINK libspdk_bdev_error.so 00:09:53.931 LIB libspdk_blobfs_bdev.a 00:09:53.931 SO libspdk_blobfs_bdev.so.6.0 00:09:53.931 LIB libspdk_bdev_malloc.a 00:09:53.931 CC module/bdev/nvme/bdev_nvme.o 00:09:53.931 CC module/bdev/passthru/vbdev_passthru.o 00:09:53.931 SO libspdk_bdev_malloc.so.6.0 00:09:53.931 CC module/bdev/raid/bdev_raid.o 00:09:53.931 SYMLINK libspdk_blobfs_bdev.so 00:09:54.189 CC module/bdev/split/vbdev_split.o 00:09:54.189 CC module/bdev/null/bdev_null_rpc.o 00:09:54.189 CC module/bdev/zone_block/vbdev_zone_block.o 00:09:54.189 LIB libspdk_bdev_lvol.a 00:09:54.189 SYMLINK libspdk_bdev_malloc.so 00:09:54.189 CC module/bdev/nvme/bdev_nvme_rpc.o 00:09:54.189 SO libspdk_bdev_lvol.so.6.0 00:09:54.189 CC module/bdev/uring/bdev_uring.o 00:09:54.189 CC module/bdev/aio/bdev_aio.o 00:09:54.189 SYMLINK libspdk_bdev_lvol.so 00:09:54.189 CC module/bdev/uring/bdev_uring_rpc.o 00:09:54.189 LIB libspdk_bdev_null.a 00:09:54.189 SO libspdk_bdev_null.so.6.0 00:09:54.189 CC module/bdev/split/vbdev_split_rpc.o 00:09:54.189 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:09:54.189 SYMLINK libspdk_bdev_null.so 00:09:54.447 CC module/bdev/raid/bdev_raid_rpc.o 00:09:54.447 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:09:54.447 LIB libspdk_bdev_passthru.a 00:09:54.447 LIB libspdk_bdev_split.a 00:09:54.447 SO 
libspdk_bdev_passthru.so.6.0 00:09:54.447 SO libspdk_bdev_split.so.6.0 00:09:54.447 LIB libspdk_bdev_uring.a 00:09:54.447 CC module/bdev/aio/bdev_aio_rpc.o 00:09:54.447 SO libspdk_bdev_uring.so.6.0 00:09:54.447 CC module/bdev/ftl/bdev_ftl.o 00:09:54.447 SYMLINK libspdk_bdev_passthru.so 00:09:54.447 SYMLINK libspdk_bdev_split.so 00:09:54.447 CC module/bdev/ftl/bdev_ftl_rpc.o 00:09:54.447 LIB libspdk_bdev_zone_block.a 00:09:54.447 SYMLINK libspdk_bdev_uring.so 00:09:54.447 CC module/bdev/raid/bdev_raid_sb.o 00:09:54.447 SO libspdk_bdev_zone_block.so.6.0 00:09:54.706 SYMLINK libspdk_bdev_zone_block.so 00:09:54.706 CC module/bdev/raid/raid0.o 00:09:54.706 LIB libspdk_bdev_aio.a 00:09:54.706 CC module/bdev/iscsi/bdev_iscsi.o 00:09:54.706 CC module/bdev/nvme/nvme_rpc.o 00:09:54.706 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:09:54.706 CC module/bdev/virtio/bdev_virtio_scsi.o 00:09:54.706 SO libspdk_bdev_aio.so.6.0 00:09:54.706 SYMLINK libspdk_bdev_aio.so 00:09:54.706 CC module/bdev/nvme/bdev_mdns_client.o 00:09:54.706 LIB libspdk_bdev_ftl.a 00:09:54.706 SO libspdk_bdev_ftl.so.6.0 00:09:54.706 CC module/bdev/raid/raid1.o 00:09:54.706 CC module/bdev/raid/concat.o 00:09:54.706 SYMLINK libspdk_bdev_ftl.so 00:09:54.706 CC module/bdev/nvme/vbdev_opal.o 00:09:54.967 CC module/bdev/virtio/bdev_virtio_blk.o 00:09:54.967 CC module/bdev/virtio/bdev_virtio_rpc.o 00:09:54.967 CC module/bdev/nvme/vbdev_opal_rpc.o 00:09:54.967 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:09:54.967 LIB libspdk_bdev_iscsi.a 00:09:54.967 SO libspdk_bdev_iscsi.so.6.0 00:09:54.967 SYMLINK libspdk_bdev_iscsi.so 00:09:54.967 LIB libspdk_bdev_raid.a 00:09:54.967 LIB libspdk_bdev_virtio.a 00:09:55.227 SO libspdk_bdev_raid.so.6.0 00:09:55.227 SO libspdk_bdev_virtio.so.6.0 00:09:55.227 SYMLINK libspdk_bdev_virtio.so 00:09:55.227 SYMLINK libspdk_bdev_raid.so 00:09:56.163 LIB libspdk_bdev_nvme.a 00:09:56.163 SO libspdk_bdev_nvme.so.7.1 00:09:56.163 SYMLINK libspdk_bdev_nvme.so 00:09:56.730 CC module/event/subsystems/vmd/vmd.o 00:09:56.730 CC module/event/subsystems/keyring/keyring.o 00:09:56.730 CC module/event/subsystems/vmd/vmd_rpc.o 00:09:56.730 CC module/event/subsystems/scheduler/scheduler.o 00:09:56.730 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:09:56.730 CC module/event/subsystems/iobuf/iobuf.o 00:09:56.730 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:09:56.730 CC module/event/subsystems/sock/sock.o 00:09:56.730 CC module/event/subsystems/fsdev/fsdev.o 00:09:56.730 LIB libspdk_event_keyring.a 00:09:56.730 LIB libspdk_event_vhost_blk.a 00:09:56.730 LIB libspdk_event_vmd.a 00:09:56.730 SO libspdk_event_keyring.so.1.0 00:09:56.730 LIB libspdk_event_fsdev.a 00:09:56.730 SO libspdk_event_vhost_blk.so.3.0 00:09:56.730 LIB libspdk_event_sock.a 00:09:56.730 SO libspdk_event_vmd.so.6.0 00:09:56.730 SO libspdk_event_fsdev.so.1.0 00:09:56.730 SO libspdk_event_sock.so.5.0 00:09:56.730 SYMLINK libspdk_event_keyring.so 00:09:56.730 SYMLINK libspdk_event_vhost_blk.so 00:09:56.730 LIB libspdk_event_iobuf.a 00:09:56.730 LIB libspdk_event_scheduler.a 00:09:56.730 SYMLINK libspdk_event_vmd.so 00:09:56.730 SYMLINK libspdk_event_fsdev.so 00:09:56.730 SYMLINK libspdk_event_sock.so 00:09:56.730 SO libspdk_event_iobuf.so.3.0 00:09:56.730 SO libspdk_event_scheduler.so.4.0 00:09:56.989 SYMLINK libspdk_event_iobuf.so 00:09:56.989 SYMLINK libspdk_event_scheduler.so 00:09:57.249 CC module/event/subsystems/accel/accel.o 00:09:57.249 LIB libspdk_event_accel.a 00:09:57.249 SO libspdk_event_accel.so.6.0 00:09:57.249 SYMLINK libspdk_event_accel.so 
00:09:57.508 CC module/event/subsystems/bdev/bdev.o 00:09:57.768 LIB libspdk_event_bdev.a 00:09:57.768 SO libspdk_event_bdev.so.6.0 00:09:57.768 SYMLINK libspdk_event_bdev.so 00:09:58.029 CC module/event/subsystems/ublk/ublk.o 00:09:58.029 CC module/event/subsystems/scsi/scsi.o 00:09:58.029 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:09:58.029 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:09:58.029 CC module/event/subsystems/nbd/nbd.o 00:09:58.029 LIB libspdk_event_nbd.a 00:09:58.029 LIB libspdk_event_ublk.a 00:09:58.029 SO libspdk_event_nbd.so.6.0 00:09:58.029 SO libspdk_event_ublk.so.3.0 00:09:58.029 LIB libspdk_event_scsi.a 00:09:58.289 SO libspdk_event_scsi.so.6.0 00:09:58.289 SYMLINK libspdk_event_nbd.so 00:09:58.289 SYMLINK libspdk_event_ublk.so 00:09:58.289 LIB libspdk_event_nvmf.a 00:09:58.289 SO libspdk_event_nvmf.so.6.0 00:09:58.289 SYMLINK libspdk_event_scsi.so 00:09:58.289 SYMLINK libspdk_event_nvmf.so 00:09:58.550 CC module/event/subsystems/iscsi/iscsi.o 00:09:58.550 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:09:58.550 LIB libspdk_event_iscsi.a 00:09:58.550 LIB libspdk_event_vhost_scsi.a 00:09:58.550 SO libspdk_event_vhost_scsi.so.3.0 00:09:58.550 SO libspdk_event_iscsi.so.6.0 00:09:58.550 SYMLINK libspdk_event_iscsi.so 00:09:58.550 SYMLINK libspdk_event_vhost_scsi.so 00:09:58.810 SO libspdk.so.6.0 00:09:58.810 SYMLINK libspdk.so 00:09:59.069 CXX app/trace/trace.o 00:09:59.069 CC app/trace_record/trace_record.o 00:09:59.069 CC examples/interrupt_tgt/interrupt_tgt.o 00:09:59.069 CC app/nvmf_tgt/nvmf_main.o 00:09:59.069 CC app/iscsi_tgt/iscsi_tgt.o 00:09:59.069 CC examples/util/zipf/zipf.o 00:09:59.069 CC examples/ioat/perf/perf.o 00:09:59.069 CC test/thread/poller_perf/poller_perf.o 00:09:59.069 CC test/dma/test_dma/test_dma.o 00:09:59.069 CC test/app/bdev_svc/bdev_svc.o 00:09:59.069 LINK interrupt_tgt 00:09:59.069 LINK zipf 00:09:59.069 LINK spdk_trace_record 00:09:59.069 LINK nvmf_tgt 00:09:59.326 LINK poller_perf 00:09:59.326 LINK iscsi_tgt 00:09:59.326 LINK ioat_perf 00:09:59.326 LINK spdk_trace 00:09:59.326 LINK bdev_svc 00:09:59.326 CC test/app/jsoncat/jsoncat.o 00:09:59.326 CC test/app/histogram_perf/histogram_perf.o 00:09:59.326 CC test/app/stub/stub.o 00:09:59.584 CC app/spdk_tgt/spdk_tgt.o 00:09:59.584 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:09:59.584 CC examples/ioat/verify/verify.o 00:09:59.584 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:09:59.584 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:09:59.584 LINK test_dma 00:09:59.584 LINK histogram_perf 00:09:59.584 LINK jsoncat 00:09:59.584 LINK stub 00:09:59.584 CC examples/thread/thread/thread_ex.o 00:09:59.584 LINK spdk_tgt 00:09:59.584 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:09:59.584 LINK verify 00:09:59.841 TEST_HEADER include/spdk/accel.h 00:09:59.842 TEST_HEADER include/spdk/accel_module.h 00:09:59.842 TEST_HEADER include/spdk/assert.h 00:09:59.842 TEST_HEADER include/spdk/barrier.h 00:09:59.842 TEST_HEADER include/spdk/base64.h 00:09:59.842 TEST_HEADER include/spdk/bdev.h 00:09:59.842 TEST_HEADER include/spdk/bdev_module.h 00:09:59.842 TEST_HEADER include/spdk/bdev_zone.h 00:09:59.842 TEST_HEADER include/spdk/bit_array.h 00:09:59.842 TEST_HEADER include/spdk/bit_pool.h 00:09:59.842 TEST_HEADER include/spdk/blob_bdev.h 00:09:59.842 TEST_HEADER include/spdk/blobfs_bdev.h 00:09:59.842 TEST_HEADER include/spdk/blobfs.h 00:09:59.842 TEST_HEADER include/spdk/blob.h 00:09:59.842 TEST_HEADER include/spdk/conf.h 00:09:59.842 TEST_HEADER include/spdk/config.h 00:09:59.842 TEST_HEADER 
include/spdk/cpuset.h 00:09:59.842 TEST_HEADER include/spdk/crc16.h 00:09:59.842 TEST_HEADER include/spdk/crc32.h 00:09:59.842 TEST_HEADER include/spdk/crc64.h 00:09:59.842 TEST_HEADER include/spdk/dif.h 00:09:59.842 TEST_HEADER include/spdk/dma.h 00:09:59.842 TEST_HEADER include/spdk/endian.h 00:09:59.842 TEST_HEADER include/spdk/env_dpdk.h 00:09:59.842 TEST_HEADER include/spdk/env.h 00:09:59.842 TEST_HEADER include/spdk/event.h 00:09:59.842 TEST_HEADER include/spdk/fd_group.h 00:09:59.842 TEST_HEADER include/spdk/fd.h 00:09:59.842 TEST_HEADER include/spdk/file.h 00:09:59.842 TEST_HEADER include/spdk/fsdev.h 00:09:59.842 TEST_HEADER include/spdk/fsdev_module.h 00:09:59.842 TEST_HEADER include/spdk/ftl.h 00:09:59.842 TEST_HEADER include/spdk/fuse_dispatcher.h 00:09:59.842 TEST_HEADER include/spdk/gpt_spec.h 00:09:59.842 TEST_HEADER include/spdk/hexlify.h 00:09:59.842 TEST_HEADER include/spdk/histogram_data.h 00:09:59.842 TEST_HEADER include/spdk/idxd.h 00:09:59.842 TEST_HEADER include/spdk/idxd_spec.h 00:09:59.842 LINK nvme_fuzz 00:09:59.842 TEST_HEADER include/spdk/init.h 00:09:59.842 TEST_HEADER include/spdk/ioat.h 00:09:59.842 TEST_HEADER include/spdk/ioat_spec.h 00:09:59.842 TEST_HEADER include/spdk/iscsi_spec.h 00:09:59.842 TEST_HEADER include/spdk/json.h 00:09:59.842 TEST_HEADER include/spdk/jsonrpc.h 00:09:59.842 TEST_HEADER include/spdk/keyring.h 00:09:59.842 TEST_HEADER include/spdk/keyring_module.h 00:09:59.842 TEST_HEADER include/spdk/likely.h 00:09:59.842 TEST_HEADER include/spdk/log.h 00:09:59.842 TEST_HEADER include/spdk/lvol.h 00:09:59.842 TEST_HEADER include/spdk/md5.h 00:09:59.842 TEST_HEADER include/spdk/memory.h 00:09:59.842 TEST_HEADER include/spdk/mmio.h 00:09:59.842 TEST_HEADER include/spdk/nbd.h 00:09:59.842 TEST_HEADER include/spdk/net.h 00:09:59.842 TEST_HEADER include/spdk/notify.h 00:09:59.842 TEST_HEADER include/spdk/nvme.h 00:09:59.842 TEST_HEADER include/spdk/nvme_intel.h 00:09:59.842 TEST_HEADER include/spdk/nvme_ocssd.h 00:09:59.842 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:09:59.842 TEST_HEADER include/spdk/nvme_spec.h 00:09:59.842 TEST_HEADER include/spdk/nvme_zns.h 00:09:59.842 CC examples/sock/hello_world/hello_sock.o 00:09:59.842 TEST_HEADER include/spdk/nvmf_cmd.h 00:09:59.842 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:09:59.842 TEST_HEADER include/spdk/nvmf.h 00:09:59.842 TEST_HEADER include/spdk/nvmf_spec.h 00:09:59.842 TEST_HEADER include/spdk/nvmf_transport.h 00:09:59.842 TEST_HEADER include/spdk/opal.h 00:09:59.842 TEST_HEADER include/spdk/opal_spec.h 00:09:59.842 CC examples/vmd/lsvmd/lsvmd.o 00:09:59.842 TEST_HEADER include/spdk/pci_ids.h 00:09:59.842 TEST_HEADER include/spdk/pipe.h 00:09:59.842 TEST_HEADER include/spdk/queue.h 00:09:59.842 TEST_HEADER include/spdk/reduce.h 00:09:59.842 TEST_HEADER include/spdk/rpc.h 00:09:59.842 TEST_HEADER include/spdk/scheduler.h 00:09:59.842 TEST_HEADER include/spdk/scsi.h 00:09:59.842 TEST_HEADER include/spdk/scsi_spec.h 00:09:59.842 LINK thread 00:09:59.842 TEST_HEADER include/spdk/sock.h 00:09:59.842 TEST_HEADER include/spdk/stdinc.h 00:09:59.842 CC test/env/mem_callbacks/mem_callbacks.o 00:09:59.842 TEST_HEADER include/spdk/string.h 00:09:59.842 TEST_HEADER include/spdk/thread.h 00:09:59.842 TEST_HEADER include/spdk/trace.h 00:09:59.842 TEST_HEADER include/spdk/trace_parser.h 00:09:59.842 CC examples/vmd/led/led.o 00:09:59.842 TEST_HEADER include/spdk/tree.h 00:09:59.842 TEST_HEADER include/spdk/ublk.h 00:09:59.842 TEST_HEADER include/spdk/util.h 00:09:59.842 TEST_HEADER include/spdk/uuid.h 
00:09:59.842 TEST_HEADER include/spdk/version.h 00:09:59.842 TEST_HEADER include/spdk/vfio_user_pci.h 00:09:59.842 CC app/spdk_lspci/spdk_lspci.o 00:09:59.842 TEST_HEADER include/spdk/vfio_user_spec.h 00:09:59.842 TEST_HEADER include/spdk/vhost.h 00:09:59.842 TEST_HEADER include/spdk/vmd.h 00:09:59.842 TEST_HEADER include/spdk/xor.h 00:09:59.842 TEST_HEADER include/spdk/zipf.h 00:09:59.842 CXX test/cpp_headers/accel.o 00:10:00.139 LINK lsvmd 00:10:00.139 CC app/spdk_nvme_perf/perf.o 00:10:00.139 LINK vhost_fuzz 00:10:00.139 LINK spdk_lspci 00:10:00.139 LINK led 00:10:00.139 CXX test/cpp_headers/accel_module.o 00:10:00.139 LINK hello_sock 00:10:00.139 CXX test/cpp_headers/assert.o 00:10:00.139 CC app/spdk_nvme_identify/identify.o 00:10:00.139 CXX test/cpp_headers/barrier.o 00:10:00.139 CC app/spdk_nvme_discover/discovery_aer.o 00:10:00.139 CC test/event/event_perf/event_perf.o 00:10:00.139 CC test/event/reactor/reactor.o 00:10:00.396 CXX test/cpp_headers/base64.o 00:10:00.396 CC test/event/reactor_perf/reactor_perf.o 00:10:00.396 CC examples/idxd/perf/perf.o 00:10:00.396 LINK event_perf 00:10:00.396 LINK mem_callbacks 00:10:00.396 LINK spdk_nvme_discover 00:10:00.396 LINK reactor 00:10:00.396 LINK reactor_perf 00:10:00.396 CXX test/cpp_headers/bdev.o 00:10:00.396 CXX test/cpp_headers/bdev_module.o 00:10:00.396 CC test/env/vtophys/vtophys.o 00:10:00.653 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:10:00.653 CC test/env/memory/memory_ut.o 00:10:00.653 LINK spdk_nvme_identify 00:10:00.653 CC test/event/app_repeat/app_repeat.o 00:10:00.653 LINK spdk_nvme_perf 00:10:00.653 LINK vtophys 00:10:00.653 CXX test/cpp_headers/bdev_zone.o 00:10:00.653 LINK idxd_perf 00:10:00.653 LINK env_dpdk_post_init 00:10:00.653 LINK app_repeat 00:10:00.911 CC examples/fsdev/hello_world/hello_fsdev.o 00:10:00.911 LINK iscsi_fuzz 00:10:00.911 CC app/spdk_top/spdk_top.o 00:10:00.911 CXX test/cpp_headers/bit_array.o 00:10:00.911 CXX test/cpp_headers/bit_pool.o 00:10:00.911 CC test/event/scheduler/scheduler.o 00:10:00.911 CC test/env/pci/pci_ut.o 00:10:00.911 CC examples/accel/perf/accel_perf.o 00:10:00.911 CXX test/cpp_headers/blob_bdev.o 00:10:00.911 LINK hello_fsdev 00:10:01.169 LINK scheduler 00:10:01.169 CC examples/blob/hello_world/hello_blob.o 00:10:01.169 CC examples/nvme/hello_world/hello_world.o 00:10:01.169 CC test/nvme/aer/aer.o 00:10:01.169 CXX test/cpp_headers/blobfs_bdev.o 00:10:01.169 LINK pci_ut 00:10:01.169 CC test/nvme/reset/reset.o 00:10:01.169 CC test/nvme/sgl/sgl.o 00:10:01.169 LINK hello_blob 00:10:01.429 LINK accel_perf 00:10:01.429 CXX test/cpp_headers/blobfs.o 00:10:01.429 LINK hello_world 00:10:01.429 LINK aer 00:10:01.429 CXX test/cpp_headers/blob.o 00:10:01.429 CC test/nvme/e2edp/nvme_dp.o 00:10:01.429 LINK reset 00:10:01.429 LINK memory_ut 00:10:01.429 LINK spdk_top 00:10:01.429 CC examples/nvme/reconnect/reconnect.o 00:10:01.429 LINK sgl 00:10:01.429 CC test/rpc_client/rpc_client_test.o 00:10:01.429 CXX test/cpp_headers/conf.o 00:10:01.690 CC examples/blob/cli/blobcli.o 00:10:01.690 CXX test/cpp_headers/config.o 00:10:01.690 CC test/nvme/overhead/overhead.o 00:10:01.690 LINK rpc_client_test 00:10:01.690 CXX test/cpp_headers/cpuset.o 00:10:01.690 CC test/nvme/err_injection/err_injection.o 00:10:01.690 LINK nvme_dp 00:10:01.690 CC examples/nvme/nvme_manage/nvme_manage.o 00:10:01.690 CC test/nvme/startup/startup.o 00:10:01.690 CC app/vhost/vhost.o 00:10:01.951 LINK reconnect 00:10:01.951 CXX test/cpp_headers/crc16.o 00:10:01.951 CXX test/cpp_headers/crc32.o 00:10:01.951 LINK 
err_injection 00:10:01.951 LINK startup 00:10:01.951 LINK overhead 00:10:01.951 LINK vhost 00:10:01.951 CXX test/cpp_headers/crc64.o 00:10:01.951 LINK blobcli 00:10:01.951 CXX test/cpp_headers/dif.o 00:10:01.951 CC examples/bdev/hello_world/hello_bdev.o 00:10:01.951 CXX test/cpp_headers/dma.o 00:10:02.209 CC test/nvme/reserve/reserve.o 00:10:02.209 LINK nvme_manage 00:10:02.209 CC test/blobfs/mkfs/mkfs.o 00:10:02.209 CXX test/cpp_headers/endian.o 00:10:02.209 CC test/accel/dif/dif.o 00:10:02.209 CC test/nvme/simple_copy/simple_copy.o 00:10:02.209 CC app/spdk_dd/spdk_dd.o 00:10:02.209 LINK hello_bdev 00:10:02.209 CC test/nvme/connect_stress/connect_stress.o 00:10:02.209 CC examples/nvme/arbitration/arbitration.o 00:10:02.209 CXX test/cpp_headers/env_dpdk.o 00:10:02.466 LINK reserve 00:10:02.466 LINK mkfs 00:10:02.466 LINK connect_stress 00:10:02.466 LINK simple_copy 00:10:02.466 CXX test/cpp_headers/env.o 00:10:02.466 CC app/fio/nvme/fio_plugin.o 00:10:02.466 CXX test/cpp_headers/event.o 00:10:02.466 CC examples/bdev/bdevperf/bdevperf.o 00:10:02.466 CXX test/cpp_headers/fd_group.o 00:10:02.466 LINK arbitration 00:10:02.725 CC test/nvme/boot_partition/boot_partition.o 00:10:02.725 CC examples/nvme/hotplug/hotplug.o 00:10:02.725 CXX test/cpp_headers/fd.o 00:10:02.725 CXX test/cpp_headers/file.o 00:10:02.725 CC test/nvme/fused_ordering/fused_ordering.o 00:10:02.725 LINK spdk_dd 00:10:02.725 CC test/nvme/compliance/nvme_compliance.o 00:10:02.725 LINK dif 00:10:02.725 LINK boot_partition 00:10:02.725 LINK hotplug 00:10:02.983 LINK fused_ordering 00:10:02.983 CXX test/cpp_headers/fsdev.o 00:10:02.983 LINK spdk_nvme 00:10:02.983 CC examples/nvme/cmb_copy/cmb_copy.o 00:10:02.983 CC app/fio/bdev/fio_plugin.o 00:10:02.983 CC test/lvol/esnap/esnap.o 00:10:02.983 CC test/nvme/doorbell_aers/doorbell_aers.o 00:10:02.983 LINK nvme_compliance 00:10:02.983 CXX test/cpp_headers/fsdev_module.o 00:10:02.983 CC examples/nvme/abort/abort.o 00:10:02.983 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:10:02.983 CC test/nvme/fdp/fdp.o 00:10:03.241 LINK cmb_copy 00:10:03.241 LINK doorbell_aers 00:10:03.241 LINK bdevperf 00:10:03.241 CXX test/cpp_headers/ftl.o 00:10:03.241 LINK pmr_persistence 00:10:03.241 CC test/nvme/cuse/cuse.o 00:10:03.241 CXX test/cpp_headers/fuse_dispatcher.o 00:10:03.241 CXX test/cpp_headers/gpt_spec.o 00:10:03.241 CXX test/cpp_headers/hexlify.o 00:10:03.241 LINK abort 00:10:03.241 LINK fdp 00:10:03.241 CXX test/cpp_headers/histogram_data.o 00:10:03.499 CXX test/cpp_headers/idxd.o 00:10:03.499 LINK spdk_bdev 00:10:03.499 CC test/bdev/bdevio/bdevio.o 00:10:03.499 CXX test/cpp_headers/idxd_spec.o 00:10:03.499 CXX test/cpp_headers/init.o 00:10:03.499 CXX test/cpp_headers/ioat.o 00:10:03.499 CXX test/cpp_headers/ioat_spec.o 00:10:03.499 CXX test/cpp_headers/iscsi_spec.o 00:10:03.499 CXX test/cpp_headers/json.o 00:10:03.499 CXX test/cpp_headers/jsonrpc.o 00:10:03.499 CXX test/cpp_headers/keyring.o 00:10:03.499 CXX test/cpp_headers/keyring_module.o 00:10:03.499 CXX test/cpp_headers/likely.o 00:10:03.499 CXX test/cpp_headers/log.o 00:10:03.759 CC examples/nvmf/nvmf/nvmf.o 00:10:03.759 CXX test/cpp_headers/lvol.o 00:10:03.759 CXX test/cpp_headers/md5.o 00:10:03.759 CXX test/cpp_headers/memory.o 00:10:03.759 CXX test/cpp_headers/mmio.o 00:10:03.759 CXX test/cpp_headers/nbd.o 00:10:03.759 CXX test/cpp_headers/net.o 00:10:03.759 CXX test/cpp_headers/notify.o 00:10:03.759 CXX test/cpp_headers/nvme.o 00:10:03.759 LINK bdevio 00:10:03.759 CXX test/cpp_headers/nvme_intel.o 00:10:04.018 CXX 
test/cpp_headers/nvme_ocssd.o 00:10:04.018 CXX test/cpp_headers/nvme_ocssd_spec.o 00:10:04.018 CXX test/cpp_headers/nvme_spec.o 00:10:04.018 CXX test/cpp_headers/nvme_zns.o 00:10:04.018 LINK nvmf 00:10:04.018 CXX test/cpp_headers/nvmf_cmd.o 00:10:04.018 CXX test/cpp_headers/nvmf_fc_spec.o 00:10:04.018 CXX test/cpp_headers/nvmf.o 00:10:04.018 CXX test/cpp_headers/nvmf_spec.o 00:10:04.018 CXX test/cpp_headers/nvmf_transport.o 00:10:04.018 CXX test/cpp_headers/opal.o 00:10:04.018 CXX test/cpp_headers/opal_spec.o 00:10:04.018 CXX test/cpp_headers/pci_ids.o 00:10:04.018 CXX test/cpp_headers/pipe.o 00:10:04.018 CXX test/cpp_headers/queue.o 00:10:04.276 CXX test/cpp_headers/reduce.o 00:10:04.276 CXX test/cpp_headers/rpc.o 00:10:04.276 CXX test/cpp_headers/scheduler.o 00:10:04.276 CXX test/cpp_headers/scsi.o 00:10:04.276 CXX test/cpp_headers/scsi_spec.o 00:10:04.276 CXX test/cpp_headers/sock.o 00:10:04.276 CXX test/cpp_headers/stdinc.o 00:10:04.276 CXX test/cpp_headers/string.o 00:10:04.276 LINK cuse 00:10:04.276 CXX test/cpp_headers/thread.o 00:10:04.276 CXX test/cpp_headers/trace.o 00:10:04.276 CXX test/cpp_headers/trace_parser.o 00:10:04.276 CXX test/cpp_headers/tree.o 00:10:04.276 CXX test/cpp_headers/ublk.o 00:10:04.276 CXX test/cpp_headers/util.o 00:10:04.276 CXX test/cpp_headers/uuid.o 00:10:04.276 CXX test/cpp_headers/version.o 00:10:04.276 CXX test/cpp_headers/vfio_user_pci.o 00:10:04.536 CXX test/cpp_headers/vfio_user_spec.o 00:10:04.536 CXX test/cpp_headers/vhost.o 00:10:04.536 CXX test/cpp_headers/vmd.o 00:10:04.536 CXX test/cpp_headers/xor.o 00:10:04.536 CXX test/cpp_headers/zipf.o 00:10:07.061 LINK esnap 00:10:07.319 00:10:07.319 real 1m5.617s 00:10:07.319 user 6m4.774s 00:10:07.319 sys 1m6.395s 00:10:07.319 14:36:16 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:10:07.319 ************************************ 00:10:07.319 END TEST make 00:10:07.319 ************************************ 00:10:07.319 14:36:16 make -- common/autotest_common.sh@10 -- $ set +x 00:10:07.319 14:36:16 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:10:07.319 14:36:16 -- pm/common@29 -- $ signal_monitor_resources TERM 00:10:07.319 14:36:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:10:07.319 14:36:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:07.319 14:36:16 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:10:07.319 14:36:16 -- pm/common@44 -- $ pid=5045 00:10:07.319 14:36:16 -- pm/common@50 -- $ kill -TERM 5045 00:10:07.319 14:36:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:07.319 14:36:16 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:10:07.319 14:36:16 -- pm/common@44 -- $ pid=5046 00:10:07.319 14:36:16 -- pm/common@50 -- $ kill -TERM 5046 00:10:07.319 14:36:16 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:10:07.319 14:36:16 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:10:07.319 14:36:16 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:07.319 14:36:16 -- common/autotest_common.sh@1691 -- # lcov --version 00:10:07.319 14:36:16 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:07.578 14:36:16 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:07.578 14:36:16 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.578 14:36:16 -- scripts/common.sh@333 -- # local ver1 
ver1_l 00:10:07.578 14:36:16 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.578 14:36:16 -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.578 14:36:16 -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.578 14:36:16 -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.578 14:36:16 -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.578 14:36:16 -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.578 14:36:16 -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.578 14:36:16 -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.578 14:36:16 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.578 14:36:16 -- scripts/common.sh@344 -- # case "$op" in 00:10:07.578 14:36:16 -- scripts/common.sh@345 -- # : 1 00:10:07.578 14:36:16 -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.578 14:36:16 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:07.578 14:36:16 -- scripts/common.sh@365 -- # decimal 1 00:10:07.578 14:36:16 -- scripts/common.sh@353 -- # local d=1 00:10:07.578 14:36:16 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.578 14:36:16 -- scripts/common.sh@355 -- # echo 1 00:10:07.578 14:36:16 -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.578 14:36:16 -- scripts/common.sh@366 -- # decimal 2 00:10:07.578 14:36:16 -- scripts/common.sh@353 -- # local d=2 00:10:07.578 14:36:16 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.578 14:36:16 -- scripts/common.sh@355 -- # echo 2 00:10:07.578 14:36:16 -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.578 14:36:16 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.578 14:36:16 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.578 14:36:16 -- scripts/common.sh@368 -- # return 0 00:10:07.578 14:36:16 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.578 14:36:16 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:07.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.578 --rc genhtml_branch_coverage=1 00:10:07.578 --rc genhtml_function_coverage=1 00:10:07.578 --rc genhtml_legend=1 00:10:07.578 --rc geninfo_all_blocks=1 00:10:07.578 --rc geninfo_unexecuted_blocks=1 00:10:07.578 00:10:07.578 ' 00:10:07.578 14:36:16 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:07.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.578 --rc genhtml_branch_coverage=1 00:10:07.578 --rc genhtml_function_coverage=1 00:10:07.578 --rc genhtml_legend=1 00:10:07.578 --rc geninfo_all_blocks=1 00:10:07.578 --rc geninfo_unexecuted_blocks=1 00:10:07.578 00:10:07.578 ' 00:10:07.578 14:36:16 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:07.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.578 --rc genhtml_branch_coverage=1 00:10:07.578 --rc genhtml_function_coverage=1 00:10:07.578 --rc genhtml_legend=1 00:10:07.578 --rc geninfo_all_blocks=1 00:10:07.578 --rc geninfo_unexecuted_blocks=1 00:10:07.578 00:10:07.578 ' 00:10:07.578 14:36:16 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:07.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.578 --rc genhtml_branch_coverage=1 00:10:07.578 --rc genhtml_function_coverage=1 00:10:07.578 --rc genhtml_legend=1 00:10:07.578 --rc geninfo_all_blocks=1 00:10:07.578 --rc geninfo_unexecuted_blocks=1 00:10:07.578 00:10:07.578 ' 00:10:07.578 14:36:16 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:07.578 14:36:16 -- nvmf/common.sh@7 -- # uname 
-s 00:10:07.578 14:36:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.578 14:36:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.578 14:36:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.578 14:36:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.578 14:36:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.578 14:36:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.578 14:36:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.579 14:36:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.579 14:36:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.579 14:36:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.579 14:36:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:10:07.579 14:36:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:10:07.579 14:36:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.579 14:36:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.579 14:36:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:07.579 14:36:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.579 14:36:16 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:07.579 14:36:16 -- scripts/common.sh@15 -- # shopt -s extglob 00:10:07.579 14:36:16 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.579 14:36:16 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.579 14:36:16 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.579 14:36:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.579 14:36:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.579 14:36:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.579 14:36:16 -- paths/export.sh@5 -- # export PATH 00:10:07.579 14:36:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.579 14:36:16 -- nvmf/common.sh@51 -- # : 0 00:10:07.579 14:36:16 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:07.579 14:36:16 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:07.579 14:36:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.579 14:36:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.579 14:36:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.579 14:36:16 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:07.579 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:07.579 14:36:16 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:07.579 14:36:16 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:07.579 14:36:16 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:07.579 14:36:16 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:10:07.579 14:36:16 -- spdk/autotest.sh@32 -- # uname -s 00:10:07.579 14:36:16 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:10:07.579 14:36:16 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:10:07.579 14:36:16 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:10:07.579 14:36:16 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:10:07.579 14:36:16 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:10:07.579 14:36:16 -- spdk/autotest.sh@44 -- # modprobe nbd 00:10:07.579 14:36:16 -- spdk/autotest.sh@46 -- # type -P udevadm 00:10:07.579 14:36:16 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:10:07.579 14:36:16 -- spdk/autotest.sh@48 -- # udevadm_pid=53811 00:10:07.579 14:36:16 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:10:07.579 14:36:16 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:10:07.579 14:36:16 -- pm/common@17 -- # local monitor 00:10:07.579 14:36:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:10:07.579 14:36:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:10:07.579 14:36:16 -- pm/common@25 -- # sleep 1 00:10:07.579 14:36:16 -- pm/common@21 -- # date +%s 00:10:07.579 14:36:16 -- pm/common@21 -- # date +%s 00:10:07.579 14:36:16 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730730976 00:10:07.579 14:36:16 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730730976 00:10:07.579 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730730976_collect-vmstat.pm.log 00:10:07.579 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730730976_collect-cpu-load.pm.log 00:10:08.528 14:36:17 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:10:08.528 14:36:17 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:10:08.528 14:36:17 -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:08.528 14:36:17 -- common/autotest_common.sh@10 -- # set +x 00:10:08.528 14:36:17 -- spdk/autotest.sh@59 -- # create_test_list 00:10:08.528 14:36:17 -- common/autotest_common.sh@750 -- # xtrace_disable 00:10:08.528 14:36:17 -- common/autotest_common.sh@10 -- # set +x 00:10:08.528 14:36:17 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:10:08.528 14:36:17 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:10:08.528 14:36:17 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:10:08.528 14:36:17 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:10:08.528 14:36:17 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:10:08.528 14:36:17 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:10:08.528 14:36:17 -- common/autotest_common.sh@1455 -- # uname 00:10:08.528 14:36:17 -- 
common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:10:08.528 14:36:17 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:10:08.528 14:36:17 -- common/autotest_common.sh@1475 -- # uname 00:10:08.528 14:36:17 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:10:08.528 14:36:17 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:10:08.528 14:36:17 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:10:08.788 lcov: LCOV version 1.15 00:10:08.788 14:36:17 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:10:23.660 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:10:23.660 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:10:38.579 14:36:45 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:10:38.579 14:36:45 -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:38.579 14:36:45 -- common/autotest_common.sh@10 -- # set +x 00:10:38.580 14:36:45 -- spdk/autotest.sh@78 -- # rm -f 00:10:38.580 14:36:45 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:38.580 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:38.580 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:10:38.580 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:10:38.580 14:36:46 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:10:38.580 14:36:46 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:10:38.580 14:36:46 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:10:38.580 14:36:46 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:10:38.580 14:36:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:38.580 14:36:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:10:38.580 14:36:46 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:10:38.580 14:36:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:38.580 14:36:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:38.580 14:36:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:38.580 14:36:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:10:38.580 14:36:46 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:10:38.580 14:36:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:38.580 14:36:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:38.580 14:36:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:38.580 14:36:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:10:38.580 14:36:46 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:10:38.580 14:36:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:10:38.580 14:36:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:38.580 14:36:46 -- common/autotest_common.sh@1658 -- # for nvme 
in /sys/block/nvme* 00:10:38.580 14:36:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:10:38.580 14:36:46 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:10:38.580 14:36:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:10:38.580 14:36:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:38.580 14:36:46 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:10:38.580 14:36:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:38.580 14:36:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:38.580 14:36:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:10:38.580 14:36:46 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:10:38.580 14:36:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:10:38.580 No valid GPT data, bailing 00:10:38.580 14:36:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:10:38.580 14:36:46 -- scripts/common.sh@394 -- # pt= 00:10:38.580 14:36:46 -- scripts/common.sh@395 -- # return 1 00:10:38.580 14:36:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:10:38.580 1+0 records in 00:10:38.580 1+0 records out 00:10:38.580 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00351052 s, 299 MB/s 00:10:38.580 14:36:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:38.580 14:36:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:38.580 14:36:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:10:38.580 14:36:46 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:10:38.580 14:36:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:10:38.580 No valid GPT data, bailing 00:10:38.580 14:36:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:10:38.580 14:36:46 -- scripts/common.sh@394 -- # pt= 00:10:38.580 14:36:46 -- scripts/common.sh@395 -- # return 1 00:10:38.580 14:36:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:10:38.580 1+0 records in 00:10:38.580 1+0 records out 00:10:38.580 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00312626 s, 335 MB/s 00:10:38.580 14:36:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:38.580 14:36:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:38.580 14:36:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:10:38.580 14:36:46 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:10:38.580 14:36:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:10:38.580 No valid GPT data, bailing 00:10:38.580 14:36:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:10:38.580 14:36:46 -- scripts/common.sh@394 -- # pt= 00:10:38.580 14:36:46 -- scripts/common.sh@395 -- # return 1 00:10:38.580 14:36:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:10:38.580 1+0 records in 00:10:38.580 1+0 records out 00:10:38.580 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00581951 s, 180 MB/s 00:10:38.580 14:36:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:38.580 14:36:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:38.580 14:36:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:10:38.580 14:36:46 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:10:38.580 14:36:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:10:38.580 No valid GPT data, bailing 
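(Editor's aside, not part of the captured log.) The trace above is autotest.sh deciding that each NVMe namespace is safe to claim: it skips zoned namespaces, treats a device with no partition-table signature as unused, and then zeroes its first MiB. A minimal bash sketch of that check-and-wipe pattern follows; the device name is only an example and spdk-gpt.py is omitted, so blkid alone stands in for the partition check.

    dev=/dev/nvme0n1                                   # example device, not taken from this run
    name=${dev##*/}
    if [[ -e /sys/block/$name/queue/zoned && $(< /sys/block/$name/queue/zoned) != none ]]; then
        echo "skipping zoned device $dev"              # zoned namespaces are never wiped
    elif [[ -z "$(blkid -s PTTYPE -o value "$dev")" ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1        # no partition table found: claim it by zeroing 1 MiB
    fi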
00:10:38.580 14:36:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:10:38.580 14:36:46 -- scripts/common.sh@394 -- # pt= 00:10:38.580 14:36:46 -- scripts/common.sh@395 -- # return 1 00:10:38.580 14:36:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:10:38.580 1+0 records in 00:10:38.580 1+0 records out 00:10:38.580 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00342431 s, 306 MB/s 00:10:38.580 14:36:46 -- spdk/autotest.sh@105 -- # sync 00:10:38.580 14:36:46 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:10:38.580 14:36:46 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:10:38.580 14:36:46 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:10:39.144 14:36:48 -- spdk/autotest.sh@111 -- # uname -s 00:10:39.144 14:36:48 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:10:39.144 14:36:48 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:10:39.144 14:36:48 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:10:39.710 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:39.710 Hugepages 00:10:39.710 node hugesize free / total 00:10:39.710 node0 1048576kB 0 / 0 00:10:39.710 node0 2048kB 0 / 0 00:10:39.710 00:10:39.710 Type BDF Vendor Device NUMA Driver Device Block devices 00:10:39.710 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:10:39.710 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:10:39.968 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:10:39.968 14:36:48 -- spdk/autotest.sh@117 -- # uname -s 00:10:39.968 14:36:48 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:10:39.968 14:36:48 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:10:39.968 14:36:48 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:40.226 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:40.483 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:40.483 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:40.483 14:36:49 -- common/autotest_common.sh@1515 -- # sleep 1 00:10:41.440 14:36:50 -- common/autotest_common.sh@1516 -- # bdfs=() 00:10:41.440 14:36:50 -- common/autotest_common.sh@1516 -- # local bdfs 00:10:41.440 14:36:50 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:10:41.440 14:36:50 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:10:41.440 14:36:50 -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:41.440 14:36:50 -- common/autotest_common.sh@1496 -- # local bdfs 00:10:41.440 14:36:50 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:41.440 14:36:50 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:41.440 14:36:50 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:41.698 14:36:50 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:10:41.698 14:36:50 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:10:41.698 14:36:50 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:41.698 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:41.698 Waiting for block devices as requested 00:10:41.955 0000:00:11.0 (1b36 0010): uio_pci_generic 
-> nvme 00:10:41.955 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:41.955 14:36:50 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:10:41.956 14:36:50 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:10:41.956 14:36:50 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:10:41.956 14:36:50 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:10:41.956 14:36:50 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:10:41.956 14:36:50 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:10:41.956 14:36:50 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:10:41.956 14:36:50 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:10:41.956 14:36:50 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:10:41.956 14:36:50 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:10:41.956 14:36:50 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:10:41.956 14:36:50 -- common/autotest_common.sh@1529 -- # grep oacs 00:10:41.956 14:36:50 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:10:41.956 14:36:51 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:10:41.956 14:36:51 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:10:41.956 14:36:51 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:10:41.956 14:36:51 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:10:41.956 14:36:51 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:10:41.956 14:36:51 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:10:41.956 14:36:51 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:10:41.956 14:36:51 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:10:41.956 14:36:51 -- common/autotest_common.sh@1541 -- # continue 00:10:41.956 14:36:51 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:10:41.956 14:36:51 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:10:41.956 14:36:51 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:10:41.956 14:36:51 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:10:41.956 14:36:51 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:10:41.956 14:36:51 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:10:41.956 14:36:51 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:10:41.956 14:36:51 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:10:41.956 14:36:51 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:10:41.956 14:36:51 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:10:41.956 14:36:51 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:10:41.956 14:36:51 -- common/autotest_common.sh@1529 -- # grep oacs 00:10:41.956 14:36:51 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:10:41.956 14:36:51 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:10:41.956 14:36:51 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:10:41.956 14:36:51 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:10:41.956 14:36:51 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:10:41.956 14:36:51 -- 
common/autotest_common.sh@1538 -- # grep unvmcap 00:10:41.956 14:36:51 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:10:41.956 14:36:51 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:10:41.956 14:36:51 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:10:41.956 14:36:51 -- common/autotest_common.sh@1541 -- # continue 00:10:41.956 14:36:51 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:10:41.956 14:36:51 -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:41.956 14:36:51 -- common/autotest_common.sh@10 -- # set +x 00:10:41.956 14:36:51 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:10:41.956 14:36:51 -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:41.956 14:36:51 -- common/autotest_common.sh@10 -- # set +x 00:10:41.956 14:36:51 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:42.564 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:42.564 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:42.564 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:42.821 14:36:51 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:10:42.821 14:36:51 -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:42.821 14:36:51 -- common/autotest_common.sh@10 -- # set +x 00:10:42.821 14:36:51 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:10:42.821 14:36:51 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:10:42.821 14:36:51 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:10:42.821 14:36:51 -- common/autotest_common.sh@1561 -- # bdfs=() 00:10:42.821 14:36:51 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:10:42.821 14:36:51 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:10:42.821 14:36:51 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:10:42.821 14:36:51 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:10:42.821 14:36:51 -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:42.821 14:36:51 -- common/autotest_common.sh@1496 -- # local bdfs 00:10:42.821 14:36:51 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:42.821 14:36:51 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:42.821 14:36:51 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:42.821 14:36:51 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:10:42.821 14:36:51 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:10:42.821 14:36:51 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:10:42.821 14:36:51 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:10:42.821 14:36:51 -- common/autotest_common.sh@1564 -- # device=0x0010 00:10:42.821 14:36:51 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:10:42.821 14:36:51 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:10:42.821 14:36:51 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:10:42.821 14:36:51 -- common/autotest_common.sh@1564 -- # device=0x0010 00:10:42.821 14:36:51 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:10:42.821 14:36:51 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:10:42.821 14:36:51 -- common/autotest_common.sh@1570 -- # return 0 00:10:42.821 14:36:51 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:10:42.822 14:36:51 
-- common/autotest_common.sh@1578 -- # return 0 00:10:42.822 14:36:51 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:10:42.822 14:36:51 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:10:42.822 14:36:51 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:10:42.822 14:36:51 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:10:42.822 14:36:51 -- spdk/autotest.sh@149 -- # timing_enter lib 00:10:42.822 14:36:51 -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:42.822 14:36:51 -- common/autotest_common.sh@10 -- # set +x 00:10:42.822 14:36:51 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:10:42.822 14:36:51 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:10:42.822 14:36:51 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:10:42.822 14:36:51 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:10:42.822 14:36:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:42.822 14:36:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:42.822 14:36:51 -- common/autotest_common.sh@10 -- # set +x 00:10:42.822 ************************************ 00:10:42.822 START TEST env 00:10:42.822 ************************************ 00:10:42.822 14:36:51 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:10:42.822 * Looking for test storage... 00:10:42.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:10:42.822 14:36:51 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:42.822 14:36:51 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:42.822 14:36:51 env -- common/autotest_common.sh@1691 -- # lcov --version 00:10:43.080 14:36:51 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:43.080 14:36:51 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.080 14:36:51 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.080 14:36:51 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.080 14:36:51 env -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.080 14:36:51 env -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.080 14:36:51 env -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.080 14:36:51 env -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.080 14:36:51 env -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.080 14:36:51 env -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.080 14:36:51 env -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.080 14:36:51 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.080 14:36:51 env -- scripts/common.sh@344 -- # case "$op" in 00:10:43.080 14:36:51 env -- scripts/common.sh@345 -- # : 1 00:10:43.080 14:36:51 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.080 14:36:51 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:43.080 14:36:51 env -- scripts/common.sh@365 -- # decimal 1 00:10:43.080 14:36:51 env -- scripts/common.sh@353 -- # local d=1 00:10:43.080 14:36:51 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.080 14:36:51 env -- scripts/common.sh@355 -- # echo 1 00:10:43.080 14:36:51 env -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.080 14:36:51 env -- scripts/common.sh@366 -- # decimal 2 00:10:43.080 14:36:51 env -- scripts/common.sh@353 -- # local d=2 00:10:43.080 14:36:51 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.080 14:36:51 env -- scripts/common.sh@355 -- # echo 2 00:10:43.080 14:36:51 env -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.080 14:36:51 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.080 14:36:51 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.080 14:36:51 env -- scripts/common.sh@368 -- # return 0 00:10:43.080 14:36:51 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.080 14:36:51 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:43.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.080 --rc genhtml_branch_coverage=1 00:10:43.080 --rc genhtml_function_coverage=1 00:10:43.080 --rc genhtml_legend=1 00:10:43.080 --rc geninfo_all_blocks=1 00:10:43.080 --rc geninfo_unexecuted_blocks=1 00:10:43.080 00:10:43.080 ' 00:10:43.080 14:36:51 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:43.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.080 --rc genhtml_branch_coverage=1 00:10:43.080 --rc genhtml_function_coverage=1 00:10:43.080 --rc genhtml_legend=1 00:10:43.080 --rc geninfo_all_blocks=1 00:10:43.080 --rc geninfo_unexecuted_blocks=1 00:10:43.080 00:10:43.080 ' 00:10:43.080 14:36:51 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:43.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.080 --rc genhtml_branch_coverage=1 00:10:43.080 --rc genhtml_function_coverage=1 00:10:43.080 --rc genhtml_legend=1 00:10:43.080 --rc geninfo_all_blocks=1 00:10:43.080 --rc geninfo_unexecuted_blocks=1 00:10:43.080 00:10:43.080 ' 00:10:43.080 14:36:51 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:43.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.080 --rc genhtml_branch_coverage=1 00:10:43.080 --rc genhtml_function_coverage=1 00:10:43.080 --rc genhtml_legend=1 00:10:43.080 --rc geninfo_all_blocks=1 00:10:43.080 --rc geninfo_unexecuted_blocks=1 00:10:43.080 00:10:43.080 ' 00:10:43.080 14:36:51 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:10:43.080 14:36:51 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:43.080 14:36:51 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:43.080 14:36:51 env -- common/autotest_common.sh@10 -- # set +x 00:10:43.080 ************************************ 00:10:43.080 START TEST env_memory 00:10:43.080 ************************************ 00:10:43.080 14:36:51 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:10:43.080 00:10:43.080 00:10:43.080 CUnit - A unit testing framework for C - Version 2.1-3 00:10:43.080 http://cunit.sourceforge.net/ 00:10:43.080 00:10:43.080 00:10:43.080 Suite: memory 00:10:43.080 Test: alloc and free memory map ...[2024-11-04 14:36:52.021931] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:10:43.080 passed 00:10:43.080 Test: mem map translation ...[2024-11-04 14:36:52.039987] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:10:43.080 [2024-11-04 14:36:52.040031] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:10:43.080 [2024-11-04 14:36:52.040063] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:10:43.080 [2024-11-04 14:36:52.040069] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:10:43.080 passed 00:10:43.080 Test: mem map registration ...[2024-11-04 14:36:52.079038] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:10:43.080 [2024-11-04 14:36:52.079080] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:10:43.080 passed 00:10:43.080 Test: mem map adjacent registrations ...passed 00:10:43.080 00:10:43.080 Run Summary: Type Total Ran Passed Failed Inactive 00:10:43.080 suites 1 1 n/a 0 0 00:10:43.080 tests 4 4 4 0 0 00:10:43.080 asserts 152 152 152 0 n/a 00:10:43.080 00:10:43.080 Elapsed time = 0.129 seconds 00:10:43.080 00:10:43.080 real 0m0.143s 00:10:43.080 user 0m0.131s 00:10:43.080 sys 0m0.008s 00:10:43.080 14:36:52 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:43.080 14:36:52 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:10:43.080 ************************************ 00:10:43.080 END TEST env_memory 00:10:43.080 ************************************ 00:10:43.080 14:36:52 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:10:43.080 14:36:52 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:43.080 14:36:52 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:43.080 14:36:52 env -- common/autotest_common.sh@10 -- # set +x 00:10:43.080 ************************************ 00:10:43.080 START TEST env_vtophys 00:10:43.080 ************************************ 00:10:43.080 14:36:52 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:10:43.080 EAL: lib.eal log level changed from notice to debug 00:10:43.080 EAL: Detected lcore 0 as core 0 on socket 0 00:10:43.080 EAL: Detected lcore 1 as core 0 on socket 0 00:10:43.081 EAL: Detected lcore 2 as core 0 on socket 0 00:10:43.081 EAL: Detected lcore 3 as core 0 on socket 0 00:10:43.081 EAL: Detected lcore 4 as core 0 on socket 0 00:10:43.081 EAL: Detected lcore 5 as core 0 on socket 0 00:10:43.081 EAL: Detected lcore 6 as core 0 on socket 0 00:10:43.081 EAL: Detected lcore 7 as core 0 on socket 0 00:10:43.081 EAL: Detected lcore 8 as core 0 on socket 0 00:10:43.081 EAL: Detected lcore 9 as core 0 on socket 0 00:10:43.081 EAL: Maximum logical cores by configuration: 128 00:10:43.081 EAL: Detected CPU lcores: 10 00:10:43.081 EAL: Detected NUMA nodes: 1 00:10:43.081 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:10:43.081 EAL: Detected shared linkage of DPDK 00:10:43.081 EAL: No 
shared files mode enabled, IPC will be disabled 00:10:43.081 EAL: Selected IOVA mode 'PA' 00:10:43.081 EAL: Probing VFIO support... 00:10:43.081 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:10:43.081 EAL: VFIO modules not loaded, skipping VFIO support... 00:10:43.081 EAL: Ask a virtual area of 0x2e000 bytes 00:10:43.081 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:10:43.081 EAL: Setting up physically contiguous memory... 00:10:43.081 EAL: Setting maximum number of open files to 524288 00:10:43.081 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:10:43.081 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:10:43.081 EAL: Ask a virtual area of 0x61000 bytes 00:10:43.081 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:10:43.081 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:43.081 EAL: Ask a virtual area of 0x400000000 bytes 00:10:43.081 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:10:43.081 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:10:43.081 EAL: Ask a virtual area of 0x61000 bytes 00:10:43.081 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:10:43.081 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:43.081 EAL: Ask a virtual area of 0x400000000 bytes 00:10:43.081 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:10:43.081 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:10:43.081 EAL: Ask a virtual area of 0x61000 bytes 00:10:43.081 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:10:43.081 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:43.081 EAL: Ask a virtual area of 0x400000000 bytes 00:10:43.081 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:10:43.081 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:10:43.081 EAL: Ask a virtual area of 0x61000 bytes 00:10:43.081 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:10:43.081 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:43.081 EAL: Ask a virtual area of 0x400000000 bytes 00:10:43.081 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:10:43.081 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:10:43.081 EAL: Hugepages will be freed exactly as allocated. 00:10:43.081 EAL: No shared files mode enabled, IPC is disabled 00:10:43.081 EAL: No shared files mode enabled, IPC is disabled 00:10:43.338 EAL: TSC frequency is ~2600000 KHz 00:10:43.338 EAL: Main lcore 0 is ready (tid=7efdd8f3aa00;cpuset=[0]) 00:10:43.338 EAL: Trying to obtain current memory policy. 00:10:43.338 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:43.338 EAL: Restoring previous memory policy: 0 00:10:43.338 EAL: request: mp_malloc_sync 00:10:43.338 EAL: No shared files mode enabled, IPC is disabled 00:10:43.338 EAL: Heap on socket 0 was expanded by 2MB 00:10:43.338 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:10:43.338 EAL: No PCI address specified using 'addr=' in: bus=pci 00:10:43.338 EAL: Mem event callback 'spdk:(nil)' registered 00:10:43.338 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:10:43.338 00:10:43.338 00:10:43.338 CUnit - A unit testing framework for C - Version 2.1-3 00:10:43.338 http://cunit.sourceforge.net/ 00:10:43.338 00:10:43.338 00:10:43.338 Suite: components_suite 00:10:43.338 Test: vtophys_malloc_test ...passed 00:10:43.338 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:10:43.338 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:43.338 EAL: Restoring previous memory policy: 4 00:10:43.338 EAL: Calling mem event callback 'spdk:(nil)' 00:10:43.338 EAL: request: mp_malloc_sync 00:10:43.338 EAL: No shared files mode enabled, IPC is disabled 00:10:43.338 EAL: Heap on socket 0 was expanded by 4MB 00:10:43.338 EAL: Calling mem event callback 'spdk:(nil)' 00:10:43.338 EAL: request: mp_malloc_sync 00:10:43.338 EAL: No shared files mode enabled, IPC is disabled 00:10:43.338 EAL: Heap on socket 0 was shrunk by 4MB 00:10:43.338 EAL: Trying to obtain current memory policy. 00:10:43.338 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:43.338 EAL: Restoring previous memory policy: 4 00:10:43.338 EAL: Calling mem event callback 'spdk:(nil)' 00:10:43.338 EAL: request: mp_malloc_sync 00:10:43.338 EAL: No shared files mode enabled, IPC is disabled 00:10:43.338 EAL: Heap on socket 0 was expanded by 6MB 00:10:43.338 EAL: Calling mem event callback 'spdk:(nil)' 00:10:43.338 EAL: request: mp_malloc_sync 00:10:43.338 EAL: No shared files mode enabled, IPC is disabled 00:10:43.338 EAL: Heap on socket 0 was shrunk by 6MB 00:10:43.338 EAL: Trying to obtain current memory policy. 00:10:43.338 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:43.338 EAL: Restoring previous memory policy: 4 00:10:43.338 EAL: Calling mem event callback 'spdk:(nil)' 00:10:43.338 EAL: request: mp_malloc_sync 00:10:43.338 EAL: No shared files mode enabled, IPC is disabled 00:10:43.338 EAL: Heap on socket 0 was expanded by 10MB 00:10:43.338 EAL: Calling mem event callback 'spdk:(nil)' 00:10:43.338 EAL: request: mp_malloc_sync 00:10:43.338 EAL: No shared files mode enabled, IPC is disabled 00:10:43.338 EAL: Heap on socket 0 was shrunk by 10MB 00:10:43.338 EAL: Trying to obtain current memory policy. 00:10:43.338 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:43.338 EAL: Restoring previous memory policy: 4 00:10:43.338 EAL: Calling mem event callback 'spdk:(nil)' 00:10:43.338 EAL: request: mp_malloc_sync 00:10:43.338 EAL: No shared files mode enabled, IPC is disabled 00:10:43.338 EAL: Heap on socket 0 was expanded by 18MB 00:10:43.338 EAL: Calling mem event callback 'spdk:(nil)' 00:10:43.338 EAL: request: mp_malloc_sync 00:10:43.338 EAL: No shared files mode enabled, IPC is disabled 00:10:43.338 EAL: Heap on socket 0 was shrunk by 18MB 00:10:43.338 EAL: Trying to obtain current memory policy. 00:10:43.338 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:43.338 EAL: Restoring previous memory policy: 4 00:10:43.338 EAL: Calling mem event callback 'spdk:(nil)' 00:10:43.338 EAL: request: mp_malloc_sync 00:10:43.338 EAL: No shared files mode enabled, IPC is disabled 00:10:43.338 EAL: Heap on socket 0 was expanded by 34MB 00:10:43.338 EAL: Calling mem event callback 'spdk:(nil)' 00:10:43.338 EAL: request: mp_malloc_sync 00:10:43.338 EAL: No shared files mode enabled, IPC is disabled 00:10:43.339 EAL: Heap on socket 0 was shrunk by 34MB 00:10:43.339 EAL: Trying to obtain current memory policy. 
00:10:43.339 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:43.339 EAL: Restoring previous memory policy: 4 00:10:43.339 EAL: Calling mem event callback 'spdk:(nil)' 00:10:43.339 EAL: request: mp_malloc_sync 00:10:43.339 EAL: No shared files mode enabled, IPC is disabled 00:10:43.339 EAL: Heap on socket 0 was expanded by 66MB 00:10:43.339 EAL: Calling mem event callback 'spdk:(nil)' 00:10:43.339 EAL: request: mp_malloc_sync 00:10:43.339 EAL: No shared files mode enabled, IPC is disabled 00:10:43.339 EAL: Heap on socket 0 was shrunk by 66MB 00:10:43.339 EAL: Trying to obtain current memory policy. 00:10:43.339 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:43.339 EAL: Restoring previous memory policy: 4 00:10:43.339 EAL: Calling mem event callback 'spdk:(nil)' 00:10:43.339 EAL: request: mp_malloc_sync 00:10:43.339 EAL: No shared files mode enabled, IPC is disabled 00:10:43.339 EAL: Heap on socket 0 was expanded by 130MB 00:10:43.339 EAL: Calling mem event callback 'spdk:(nil)' 00:10:43.339 EAL: request: mp_malloc_sync 00:10:43.339 EAL: No shared files mode enabled, IPC is disabled 00:10:43.339 EAL: Heap on socket 0 was shrunk by 130MB 00:10:43.339 EAL: Trying to obtain current memory policy. 00:10:43.339 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:43.339 EAL: Restoring previous memory policy: 4 00:10:43.339 EAL: Calling mem event callback 'spdk:(nil)' 00:10:43.339 EAL: request: mp_malloc_sync 00:10:43.339 EAL: No shared files mode enabled, IPC is disabled 00:10:43.339 EAL: Heap on socket 0 was expanded by 258MB 00:10:43.339 EAL: Calling mem event callback 'spdk:(nil)' 00:10:43.596 EAL: request: mp_malloc_sync 00:10:43.596 EAL: No shared files mode enabled, IPC is disabled 00:10:43.596 EAL: Heap on socket 0 was shrunk by 258MB 00:10:43.596 EAL: Trying to obtain current memory policy. 00:10:43.596 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:43.596 EAL: Restoring previous memory policy: 4 00:10:43.596 EAL: Calling mem event callback 'spdk:(nil)' 00:10:43.596 EAL: request: mp_malloc_sync 00:10:43.596 EAL: No shared files mode enabled, IPC is disabled 00:10:43.596 EAL: Heap on socket 0 was expanded by 514MB 00:10:43.596 EAL: Calling mem event callback 'spdk:(nil)' 00:10:43.596 EAL: request: mp_malloc_sync 00:10:43.596 EAL: No shared files mode enabled, IPC is disabled 00:10:43.596 EAL: Heap on socket 0 was shrunk by 514MB 00:10:43.596 EAL: Trying to obtain current memory policy. 
00:10:43.596 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:43.854 EAL: Restoring previous memory policy: 4 00:10:43.854 EAL: Calling mem event callback 'spdk:(nil)' 00:10:43.854 EAL: request: mp_malloc_sync 00:10:43.854 EAL: No shared files mode enabled, IPC is disabled 00:10:43.854 EAL: Heap on socket 0 was expanded by 1026MB 00:10:43.854 EAL: Calling mem event callback 'spdk:(nil)' 00:10:44.112 passed 00:10:44.112 00:10:44.112 Run Summary: Type Total Ran Passed Failed Inactive 00:10:44.112 suites 1 1 n/a 0 0 00:10:44.112 tests 2 2 2 0 0 00:10:44.112 asserts 5680 5680 5680 0 n/a 00:10:44.112 00:10:44.112 Elapsed time = 0.684 seconds 00:10:44.112 EAL: request: mp_malloc_sync 00:10:44.112 EAL: No shared files mode enabled, IPC is disabled 00:10:44.112 EAL: Heap on socket 0 was shrunk by 1026MB 00:10:44.112 EAL: Calling mem event callback 'spdk:(nil)' 00:10:44.112 EAL: request: mp_malloc_sync 00:10:44.112 EAL: No shared files mode enabled, IPC is disabled 00:10:44.112 EAL: Heap on socket 0 was shrunk by 2MB 00:10:44.112 EAL: No shared files mode enabled, IPC is disabled 00:10:44.112 EAL: No shared files mode enabled, IPC is disabled 00:10:44.112 EAL: No shared files mode enabled, IPC is disabled 00:10:44.112 00:10:44.112 real 0m0.867s 00:10:44.112 user 0m0.413s 00:10:44.112 sys 0m0.325s 00:10:44.112 14:36:53 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:44.112 14:36:53 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:10:44.112 ************************************ 00:10:44.112 END TEST env_vtophys 00:10:44.112 ************************************ 00:10:44.112 14:36:53 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:10:44.112 14:36:53 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:44.112 14:36:53 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:44.112 14:36:53 env -- common/autotest_common.sh@10 -- # set +x 00:10:44.112 ************************************ 00:10:44.112 START TEST env_pci 00:10:44.112 ************************************ 00:10:44.112 14:36:53 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:10:44.112 00:10:44.112 00:10:44.112 CUnit - A unit testing framework for C - Version 2.1-3 00:10:44.112 http://cunit.sourceforge.net/ 00:10:44.112 00:10:44.112 00:10:44.112 Suite: pci 00:10:44.112 Test: pci_hook ...[2024-11-04 14:36:53.081650] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56019 has claimed it 00:10:44.112 passed 00:10:44.112 00:10:44.112 Run Summary: Type Total Ran Passed Failed Inactive 00:10:44.112 suites 1 1 n/a 0 0 00:10:44.112 tests 1 1 1 0 0 00:10:44.112 asserts 25 25 25 0 n/a 00:10:44.112 00:10:44.112 Elapsed time = 0.001 seconds 00:10:44.112 EAL: Cannot find device (10000:00:01.0) 00:10:44.112 EAL: Failed to attach device on primary process 00:10:44.112 00:10:44.112 real 0m0.016s 00:10:44.112 user 0m0.007s 00:10:44.112 sys 0m0.009s 00:10:44.112 14:36:53 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:44.112 ************************************ 00:10:44.112 END TEST env_pci 00:10:44.112 ************************************ 00:10:44.112 14:36:53 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:10:44.112 14:36:53 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:10:44.112 14:36:53 env -- env/env.sh@15 -- # uname 00:10:44.112 14:36:53 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:10:44.112 14:36:53 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:10:44.112 14:36:53 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:44.112 14:36:53 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:44.112 14:36:53 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:44.112 14:36:53 env -- common/autotest_common.sh@10 -- # set +x 00:10:44.112 ************************************ 00:10:44.112 START TEST env_dpdk_post_init 00:10:44.112 ************************************ 00:10:44.112 14:36:53 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:44.112 EAL: Detected CPU lcores: 10 00:10:44.112 EAL: Detected NUMA nodes: 1 00:10:44.112 EAL: Detected shared linkage of DPDK 00:10:44.112 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:44.112 EAL: Selected IOVA mode 'PA' 00:10:44.370 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:44.371 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:10:44.371 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:10:44.371 Starting DPDK initialization... 00:10:44.371 Starting SPDK post initialization... 00:10:44.371 SPDK NVMe probe 00:10:44.371 Attaching to 0000:00:10.0 00:10:44.371 Attaching to 0000:00:11.0 00:10:44.371 Attached to 0000:00:10.0 00:10:44.371 Attached to 0000:00:11.0 00:10:44.371 Cleaning up... 00:10:44.371 00:10:44.371 real 0m0.179s 00:10:44.371 user 0m0.039s 00:10:44.371 sys 0m0.037s 00:10:44.371 14:36:53 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:44.371 14:36:53 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:10:44.371 ************************************ 00:10:44.371 END TEST env_dpdk_post_init 00:10:44.371 ************************************ 00:10:44.371 14:36:53 env -- env/env.sh@26 -- # uname 00:10:44.371 14:36:53 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:10:44.371 14:36:53 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:10:44.371 14:36:53 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:44.371 14:36:53 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:44.371 14:36:53 env -- common/autotest_common.sh@10 -- # set +x 00:10:44.371 ************************************ 00:10:44.371 START TEST env_mem_callbacks 00:10:44.371 ************************************ 00:10:44.371 14:36:53 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:10:44.371 EAL: Detected CPU lcores: 10 00:10:44.371 EAL: Detected NUMA nodes: 1 00:10:44.371 EAL: Detected shared linkage of DPDK 00:10:44.371 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:44.371 EAL: Selected IOVA mode 'PA' 00:10:44.371 00:10:44.371 00:10:44.371 CUnit - A unit testing framework for C - Version 2.1-3 00:10:44.371 http://cunit.sourceforge.net/ 00:10:44.371 00:10:44.371 00:10:44.371 Suite: memory 00:10:44.371 Test: test ... 
00:10:44.371 register 0x200000200000 2097152 00:10:44.371 malloc 3145728 00:10:44.371 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:44.371 register 0x200000400000 4194304 00:10:44.371 buf 0x200000500000 len 3145728 PASSED 00:10:44.371 malloc 64 00:10:44.371 buf 0x2000004fff40 len 64 PASSED 00:10:44.371 malloc 4194304 00:10:44.371 register 0x200000800000 6291456 00:10:44.371 buf 0x200000a00000 len 4194304 PASSED 00:10:44.371 free 0x200000500000 3145728 00:10:44.371 free 0x2000004fff40 64 00:10:44.371 unregister 0x200000400000 4194304 PASSED 00:10:44.371 free 0x200000a00000 4194304 00:10:44.371 unregister 0x200000800000 6291456 PASSED 00:10:44.371 malloc 8388608 00:10:44.371 register 0x200000400000 10485760 00:10:44.371 buf 0x200000600000 len 8388608 PASSED 00:10:44.371 free 0x200000600000 8388608 00:10:44.371 unregister 0x200000400000 10485760 PASSED 00:10:44.371 passed 00:10:44.371 00:10:44.371 Run Summary: Type Total Ran Passed Failed Inactive 00:10:44.371 suites 1 1 n/a 0 0 00:10:44.371 tests 1 1 1 0 0 00:10:44.371 asserts 15 15 15 0 n/a 00:10:44.371 00:10:44.371 Elapsed time = 0.006 seconds 00:10:44.371 00:10:44.371 real 0m0.130s 00:10:44.371 user 0m0.012s 00:10:44.371 sys 0m0.016s 00:10:44.371 14:36:53 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:44.371 14:36:53 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:10:44.371 ************************************ 00:10:44.371 END TEST env_mem_callbacks 00:10:44.371 ************************************ 00:10:44.371 ************************************ 00:10:44.371 END TEST env 00:10:44.371 ************************************ 00:10:44.371 00:10:44.371 real 0m1.656s 00:10:44.371 user 0m0.734s 00:10:44.371 sys 0m0.578s 00:10:44.371 14:36:53 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:44.371 14:36:53 env -- common/autotest_common.sh@10 -- # set +x 00:10:44.630 14:36:53 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:10:44.630 14:36:53 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:44.630 14:36:53 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:44.630 14:36:53 -- common/autotest_common.sh@10 -- # set +x 00:10:44.630 ************************************ 00:10:44.630 START TEST rpc 00:10:44.630 ************************************ 00:10:44.630 14:36:53 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:10:44.630 * Looking for test storage... 
00:10:44.631 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:10:44.631 14:36:53 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:44.631 14:36:53 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:10:44.631 14:36:53 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:44.631 14:36:53 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:44.631 14:36:53 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.631 14:36:53 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.631 14:36:53 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.631 14:36:53 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.631 14:36:53 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.631 14:36:53 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.631 14:36:53 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.631 14:36:53 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.631 14:36:53 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.631 14:36:53 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.631 14:36:53 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.631 14:36:53 rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:44.631 14:36:53 rpc -- scripts/common.sh@345 -- # : 1 00:10:44.631 14:36:53 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.631 14:36:53 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:44.631 14:36:53 rpc -- scripts/common.sh@365 -- # decimal 1 00:10:44.631 14:36:53 rpc -- scripts/common.sh@353 -- # local d=1 00:10:44.631 14:36:53 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.631 14:36:53 rpc -- scripts/common.sh@355 -- # echo 1 00:10:44.631 14:36:53 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.631 14:36:53 rpc -- scripts/common.sh@366 -- # decimal 2 00:10:44.631 14:36:53 rpc -- scripts/common.sh@353 -- # local d=2 00:10:44.631 14:36:53 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.631 14:36:53 rpc -- scripts/common.sh@355 -- # echo 2 00:10:44.631 14:36:53 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.631 14:36:53 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.631 14:36:53 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.631 14:36:53 rpc -- scripts/common.sh@368 -- # return 0 00:10:44.631 14:36:53 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.631 14:36:53 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:44.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.631 --rc genhtml_branch_coverage=1 00:10:44.631 --rc genhtml_function_coverage=1 00:10:44.631 --rc genhtml_legend=1 00:10:44.631 --rc geninfo_all_blocks=1 00:10:44.631 --rc geninfo_unexecuted_blocks=1 00:10:44.631 00:10:44.631 ' 00:10:44.631 14:36:53 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:44.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.631 --rc genhtml_branch_coverage=1 00:10:44.631 --rc genhtml_function_coverage=1 00:10:44.631 --rc genhtml_legend=1 00:10:44.631 --rc geninfo_all_blocks=1 00:10:44.631 --rc geninfo_unexecuted_blocks=1 00:10:44.631 00:10:44.631 ' 00:10:44.631 14:36:53 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:44.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.631 --rc genhtml_branch_coverage=1 00:10:44.631 --rc genhtml_function_coverage=1 00:10:44.631 --rc 
genhtml_legend=1 00:10:44.631 --rc geninfo_all_blocks=1 00:10:44.631 --rc geninfo_unexecuted_blocks=1 00:10:44.631 00:10:44.631 ' 00:10:44.631 14:36:53 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:44.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.631 --rc genhtml_branch_coverage=1 00:10:44.631 --rc genhtml_function_coverage=1 00:10:44.631 --rc genhtml_legend=1 00:10:44.631 --rc geninfo_all_blocks=1 00:10:44.631 --rc geninfo_unexecuted_blocks=1 00:10:44.631 00:10:44.631 ' 00:10:44.631 14:36:53 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56142 00:10:44.631 14:36:53 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:10:44.631 14:36:53 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:44.631 14:36:53 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56142 00:10:44.631 14:36:53 rpc -- common/autotest_common.sh@833 -- # '[' -z 56142 ']' 00:10:44.631 14:36:53 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.631 14:36:53 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:44.631 14:36:53 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.631 14:36:53 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:44.631 14:36:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.631 [2024-11-04 14:36:53.698236] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:10:44.631 [2024-11-04 14:36:53.698471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56142 ] 00:10:44.899 [2024-11-04 14:36:53.837655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.899 [2024-11-04 14:36:53.876117] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:10:44.899 [2024-11-04 14:36:53.876346] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56142' to capture a snapshot of events at runtime. 00:10:44.899 [2024-11-04 14:36:53.876410] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:44.899 [2024-11-04 14:36:53.876438] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:44.899 [2024-11-04 14:36:53.876454] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56142 for offline analysis/debug. 
00:10:44.899 [2024-11-04 14:36:53.876788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.899 [2024-11-04 14:36:53.927308] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:45.156 14:36:54 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:45.156 14:36:54 rpc -- common/autotest_common.sh@866 -- # return 0 00:10:45.156 14:36:54 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:10:45.156 14:36:54 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:10:45.156 14:36:54 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:10:45.156 14:36:54 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:10:45.157 14:36:54 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:45.157 14:36:54 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:45.157 14:36:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.157 ************************************ 00:10:45.157 START TEST rpc_integrity 00:10:45.157 ************************************ 00:10:45.157 14:36:54 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:10:45.157 14:36:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:45.157 14:36:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.157 14:36:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:45.157 14:36:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.157 14:36:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:45.157 14:36:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:45.157 14:36:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:45.157 14:36:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:45.157 14:36:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.157 14:36:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:45.157 14:36:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.157 14:36:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:10:45.157 14:36:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:45.157 14:36:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.157 14:36:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:45.157 14:36:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.157 14:36:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:45.157 { 00:10:45.157 "name": "Malloc0", 00:10:45.157 "aliases": [ 00:10:45.157 "9c9b48d6-1512-46e9-92b1-001351354987" 00:10:45.157 ], 00:10:45.157 "product_name": "Malloc disk", 00:10:45.157 "block_size": 512, 00:10:45.157 "num_blocks": 16384, 00:10:45.157 "uuid": "9c9b48d6-1512-46e9-92b1-001351354987", 00:10:45.157 "assigned_rate_limits": { 00:10:45.157 "rw_ios_per_sec": 0, 00:10:45.157 "rw_mbytes_per_sec": 0, 00:10:45.157 "r_mbytes_per_sec": 0, 00:10:45.157 "w_mbytes_per_sec": 0 00:10:45.157 }, 00:10:45.157 "claimed": false, 00:10:45.157 "zoned": false, 00:10:45.157 
"supported_io_types": { 00:10:45.157 "read": true, 00:10:45.157 "write": true, 00:10:45.157 "unmap": true, 00:10:45.157 "flush": true, 00:10:45.157 "reset": true, 00:10:45.157 "nvme_admin": false, 00:10:45.157 "nvme_io": false, 00:10:45.157 "nvme_io_md": false, 00:10:45.157 "write_zeroes": true, 00:10:45.157 "zcopy": true, 00:10:45.157 "get_zone_info": false, 00:10:45.157 "zone_management": false, 00:10:45.157 "zone_append": false, 00:10:45.157 "compare": false, 00:10:45.157 "compare_and_write": false, 00:10:45.157 "abort": true, 00:10:45.157 "seek_hole": false, 00:10:45.157 "seek_data": false, 00:10:45.157 "copy": true, 00:10:45.157 "nvme_iov_md": false 00:10:45.157 }, 00:10:45.157 "memory_domains": [ 00:10:45.157 { 00:10:45.157 "dma_device_id": "system", 00:10:45.157 "dma_device_type": 1 00:10:45.157 }, 00:10:45.157 { 00:10:45.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.157 "dma_device_type": 2 00:10:45.157 } 00:10:45.157 ], 00:10:45.157 "driver_specific": {} 00:10:45.157 } 00:10:45.157 ]' 00:10:45.157 14:36:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:45.157 14:36:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:45.157 14:36:54 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:10:45.157 14:36:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.157 14:36:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:45.157 [2024-11-04 14:36:54.180523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:10:45.157 [2024-11-04 14:36:54.180727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.157 [2024-11-04 14:36:54.180773] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x209bf10 00:10:45.157 [2024-11-04 14:36:54.180836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.157 [2024-11-04 14:36:54.182233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.157 [2024-11-04 14:36:54.182261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:45.157 Passthru0 00:10:45.157 14:36:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.157 14:36:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:45.157 14:36:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.157 14:36:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:45.157 14:36:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.157 14:36:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:45.157 { 00:10:45.157 "name": "Malloc0", 00:10:45.157 "aliases": [ 00:10:45.157 "9c9b48d6-1512-46e9-92b1-001351354987" 00:10:45.157 ], 00:10:45.157 "product_name": "Malloc disk", 00:10:45.157 "block_size": 512, 00:10:45.157 "num_blocks": 16384, 00:10:45.157 "uuid": "9c9b48d6-1512-46e9-92b1-001351354987", 00:10:45.157 "assigned_rate_limits": { 00:10:45.157 "rw_ios_per_sec": 0, 00:10:45.157 "rw_mbytes_per_sec": 0, 00:10:45.157 "r_mbytes_per_sec": 0, 00:10:45.157 "w_mbytes_per_sec": 0 00:10:45.157 }, 00:10:45.157 "claimed": true, 00:10:45.157 "claim_type": "exclusive_write", 00:10:45.157 "zoned": false, 00:10:45.157 "supported_io_types": { 00:10:45.157 "read": true, 00:10:45.157 "write": true, 00:10:45.157 "unmap": true, 00:10:45.157 "flush": true, 00:10:45.157 "reset": true, 00:10:45.157 "nvme_admin": false, 
00:10:45.157 "nvme_io": false, 00:10:45.157 "nvme_io_md": false, 00:10:45.157 "write_zeroes": true, 00:10:45.157 "zcopy": true, 00:10:45.157 "get_zone_info": false, 00:10:45.157 "zone_management": false, 00:10:45.157 "zone_append": false, 00:10:45.157 "compare": false, 00:10:45.157 "compare_and_write": false, 00:10:45.157 "abort": true, 00:10:45.157 "seek_hole": false, 00:10:45.157 "seek_data": false, 00:10:45.157 "copy": true, 00:10:45.157 "nvme_iov_md": false 00:10:45.157 }, 00:10:45.157 "memory_domains": [ 00:10:45.157 { 00:10:45.157 "dma_device_id": "system", 00:10:45.157 "dma_device_type": 1 00:10:45.157 }, 00:10:45.157 { 00:10:45.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.157 "dma_device_type": 2 00:10:45.157 } 00:10:45.157 ], 00:10:45.157 "driver_specific": {} 00:10:45.157 }, 00:10:45.157 { 00:10:45.157 "name": "Passthru0", 00:10:45.157 "aliases": [ 00:10:45.157 "ad809526-abf3-534a-aee1-d4eff96c6683" 00:10:45.157 ], 00:10:45.157 "product_name": "passthru", 00:10:45.157 "block_size": 512, 00:10:45.157 "num_blocks": 16384, 00:10:45.157 "uuid": "ad809526-abf3-534a-aee1-d4eff96c6683", 00:10:45.157 "assigned_rate_limits": { 00:10:45.157 "rw_ios_per_sec": 0, 00:10:45.157 "rw_mbytes_per_sec": 0, 00:10:45.157 "r_mbytes_per_sec": 0, 00:10:45.157 "w_mbytes_per_sec": 0 00:10:45.157 }, 00:10:45.157 "claimed": false, 00:10:45.157 "zoned": false, 00:10:45.157 "supported_io_types": { 00:10:45.157 "read": true, 00:10:45.157 "write": true, 00:10:45.157 "unmap": true, 00:10:45.157 "flush": true, 00:10:45.157 "reset": true, 00:10:45.157 "nvme_admin": false, 00:10:45.157 "nvme_io": false, 00:10:45.157 "nvme_io_md": false, 00:10:45.157 "write_zeroes": true, 00:10:45.157 "zcopy": true, 00:10:45.157 "get_zone_info": false, 00:10:45.157 "zone_management": false, 00:10:45.157 "zone_append": false, 00:10:45.157 "compare": false, 00:10:45.157 "compare_and_write": false, 00:10:45.157 "abort": true, 00:10:45.157 "seek_hole": false, 00:10:45.157 "seek_data": false, 00:10:45.157 "copy": true, 00:10:45.157 "nvme_iov_md": false 00:10:45.157 }, 00:10:45.157 "memory_domains": [ 00:10:45.157 { 00:10:45.157 "dma_device_id": "system", 00:10:45.157 "dma_device_type": 1 00:10:45.157 }, 00:10:45.157 { 00:10:45.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.157 "dma_device_type": 2 00:10:45.157 } 00:10:45.157 ], 00:10:45.157 "driver_specific": { 00:10:45.157 "passthru": { 00:10:45.157 "name": "Passthru0", 00:10:45.157 "base_bdev_name": "Malloc0" 00:10:45.157 } 00:10:45.157 } 00:10:45.157 } 00:10:45.157 ]' 00:10:45.157 14:36:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:45.157 14:36:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:45.157 14:36:54 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:45.157 14:36:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.157 14:36:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:45.157 14:36:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.157 14:36:54 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:10:45.157 14:36:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.157 14:36:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:45.157 14:36:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.157 14:36:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:45.157 14:36:54 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.157 14:36:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:45.157 14:36:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.157 14:36:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:45.157 14:36:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:45.415 ************************************ 00:10:45.415 END TEST rpc_integrity 00:10:45.415 ************************************ 00:10:45.415 14:36:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:45.415 00:10:45.415 real 0m0.221s 00:10:45.415 user 0m0.123s 00:10:45.415 sys 0m0.033s 00:10:45.415 14:36:54 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:45.415 14:36:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:45.415 14:36:54 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:10:45.415 14:36:54 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:45.415 14:36:54 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:45.415 14:36:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.415 ************************************ 00:10:45.415 START TEST rpc_plugins 00:10:45.415 ************************************ 00:10:45.415 14:36:54 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:10:45.415 14:36:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:10:45.415 14:36:54 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.415 14:36:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:45.415 14:36:54 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.415 14:36:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:10:45.415 14:36:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:10:45.415 14:36:54 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.415 14:36:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:45.415 14:36:54 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.415 14:36:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:10:45.415 { 00:10:45.415 "name": "Malloc1", 00:10:45.415 "aliases": [ 00:10:45.415 "1d1ebbbb-6ec1-4444-bc56-a7d9129313ad" 00:10:45.415 ], 00:10:45.415 "product_name": "Malloc disk", 00:10:45.415 "block_size": 4096, 00:10:45.415 "num_blocks": 256, 00:10:45.415 "uuid": "1d1ebbbb-6ec1-4444-bc56-a7d9129313ad", 00:10:45.415 "assigned_rate_limits": { 00:10:45.415 "rw_ios_per_sec": 0, 00:10:45.415 "rw_mbytes_per_sec": 0, 00:10:45.415 "r_mbytes_per_sec": 0, 00:10:45.415 "w_mbytes_per_sec": 0 00:10:45.415 }, 00:10:45.415 "claimed": false, 00:10:45.415 "zoned": false, 00:10:45.415 "supported_io_types": { 00:10:45.415 "read": true, 00:10:45.415 "write": true, 00:10:45.415 "unmap": true, 00:10:45.415 "flush": true, 00:10:45.415 "reset": true, 00:10:45.415 "nvme_admin": false, 00:10:45.415 "nvme_io": false, 00:10:45.415 "nvme_io_md": false, 00:10:45.415 "write_zeroes": true, 00:10:45.415 "zcopy": true, 00:10:45.415 "get_zone_info": false, 00:10:45.415 "zone_management": false, 00:10:45.415 "zone_append": false, 00:10:45.415 "compare": false, 00:10:45.415 "compare_and_write": false, 00:10:45.415 "abort": true, 00:10:45.415 "seek_hole": false, 00:10:45.415 "seek_data": false, 00:10:45.415 "copy": true, 00:10:45.415 "nvme_iov_md": false 00:10:45.415 }, 00:10:45.415 "memory_domains": [ 00:10:45.415 { 
00:10:45.415 "dma_device_id": "system", 00:10:45.415 "dma_device_type": 1 00:10:45.415 }, 00:10:45.415 { 00:10:45.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.415 "dma_device_type": 2 00:10:45.415 } 00:10:45.415 ], 00:10:45.415 "driver_specific": {} 00:10:45.415 } 00:10:45.415 ]' 00:10:45.415 14:36:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:10:45.415 14:36:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:10:45.415 14:36:54 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:10:45.415 14:36:54 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.415 14:36:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:45.415 14:36:54 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.415 14:36:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:10:45.415 14:36:54 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.415 14:36:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:45.415 14:36:54 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.415 14:36:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:10:45.415 14:36:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:10:45.415 ************************************ 00:10:45.415 END TEST rpc_plugins 00:10:45.415 ************************************ 00:10:45.415 14:36:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:10:45.415 00:10:45.415 real 0m0.120s 00:10:45.415 user 0m0.068s 00:10:45.415 sys 0m0.015s 00:10:45.415 14:36:54 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:45.415 14:36:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:45.415 14:36:54 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:10:45.415 14:36:54 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:45.416 14:36:54 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:45.416 14:36:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.416 ************************************ 00:10:45.416 START TEST rpc_trace_cmd_test 00:10:45.416 ************************************ 00:10:45.416 14:36:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:10:45.416 14:36:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:10:45.416 14:36:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:10:45.416 14:36:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.416 14:36:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.416 14:36:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.416 14:36:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:10:45.416 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56142", 00:10:45.416 "tpoint_group_mask": "0x8", 00:10:45.416 "iscsi_conn": { 00:10:45.416 "mask": "0x2", 00:10:45.416 "tpoint_mask": "0x0" 00:10:45.416 }, 00:10:45.416 "scsi": { 00:10:45.416 "mask": "0x4", 00:10:45.416 "tpoint_mask": "0x0" 00:10:45.416 }, 00:10:45.416 "bdev": { 00:10:45.416 "mask": "0x8", 00:10:45.416 "tpoint_mask": "0xffffffffffffffff" 00:10:45.416 }, 00:10:45.416 "nvmf_rdma": { 00:10:45.416 "mask": "0x10", 00:10:45.416 "tpoint_mask": "0x0" 00:10:45.416 }, 00:10:45.416 "nvmf_tcp": { 00:10:45.416 "mask": "0x20", 00:10:45.416 "tpoint_mask": "0x0" 00:10:45.416 }, 00:10:45.416 "ftl": { 00:10:45.416 
"mask": "0x40", 00:10:45.416 "tpoint_mask": "0x0" 00:10:45.416 }, 00:10:45.416 "blobfs": { 00:10:45.416 "mask": "0x80", 00:10:45.416 "tpoint_mask": "0x0" 00:10:45.416 }, 00:10:45.416 "dsa": { 00:10:45.416 "mask": "0x200", 00:10:45.416 "tpoint_mask": "0x0" 00:10:45.416 }, 00:10:45.416 "thread": { 00:10:45.416 "mask": "0x400", 00:10:45.416 "tpoint_mask": "0x0" 00:10:45.416 }, 00:10:45.416 "nvme_pcie": { 00:10:45.416 "mask": "0x800", 00:10:45.416 "tpoint_mask": "0x0" 00:10:45.416 }, 00:10:45.416 "iaa": { 00:10:45.416 "mask": "0x1000", 00:10:45.416 "tpoint_mask": "0x0" 00:10:45.416 }, 00:10:45.416 "nvme_tcp": { 00:10:45.416 "mask": "0x2000", 00:10:45.416 "tpoint_mask": "0x0" 00:10:45.416 }, 00:10:45.416 "bdev_nvme": { 00:10:45.416 "mask": "0x4000", 00:10:45.416 "tpoint_mask": "0x0" 00:10:45.416 }, 00:10:45.416 "sock": { 00:10:45.416 "mask": "0x8000", 00:10:45.416 "tpoint_mask": "0x0" 00:10:45.416 }, 00:10:45.416 "blob": { 00:10:45.416 "mask": "0x10000", 00:10:45.416 "tpoint_mask": "0x0" 00:10:45.416 }, 00:10:45.416 "bdev_raid": { 00:10:45.416 "mask": "0x20000", 00:10:45.416 "tpoint_mask": "0x0" 00:10:45.416 }, 00:10:45.416 "scheduler": { 00:10:45.416 "mask": "0x40000", 00:10:45.416 "tpoint_mask": "0x0" 00:10:45.416 } 00:10:45.416 }' 00:10:45.416 14:36:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:10:45.416 14:36:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:10:45.416 14:36:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:10:45.673 14:36:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:10:45.673 14:36:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:10:45.673 14:36:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:10:45.673 14:36:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:10:45.673 14:36:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:10:45.673 14:36:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:10:45.673 ************************************ 00:10:45.673 END TEST rpc_trace_cmd_test 00:10:45.673 ************************************ 00:10:45.673 14:36:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:10:45.673 00:10:45.673 real 0m0.165s 00:10:45.673 user 0m0.137s 00:10:45.673 sys 0m0.020s 00:10:45.673 14:36:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:45.673 14:36:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.673 14:36:54 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:10:45.673 14:36:54 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:10:45.673 14:36:54 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:10:45.673 14:36:54 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:45.673 14:36:54 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:45.673 14:36:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.673 ************************************ 00:10:45.673 START TEST rpc_daemon_integrity 00:10:45.673 ************************************ 00:10:45.673 14:36:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:10:45.673 14:36:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:45.673 14:36:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.673 14:36:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:45.673 
14:36:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.673 14:36:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:45.673 14:36:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:45.673 14:36:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:45.673 14:36:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:45.673 14:36:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.673 14:36:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:45.673 14:36:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.673 14:36:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:10:45.673 14:36:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:45.673 14:36:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.673 14:36:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:45.673 14:36:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.673 14:36:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:45.673 { 00:10:45.673 "name": "Malloc2", 00:10:45.673 "aliases": [ 00:10:45.673 "77c6038c-ea41-402a-99c0-e2035016d003" 00:10:45.673 ], 00:10:45.673 "product_name": "Malloc disk", 00:10:45.673 "block_size": 512, 00:10:45.673 "num_blocks": 16384, 00:10:45.673 "uuid": "77c6038c-ea41-402a-99c0-e2035016d003", 00:10:45.673 "assigned_rate_limits": { 00:10:45.673 "rw_ios_per_sec": 0, 00:10:45.673 "rw_mbytes_per_sec": 0, 00:10:45.673 "r_mbytes_per_sec": 0, 00:10:45.673 "w_mbytes_per_sec": 0 00:10:45.673 }, 00:10:45.673 "claimed": false, 00:10:45.673 "zoned": false, 00:10:45.673 "supported_io_types": { 00:10:45.673 "read": true, 00:10:45.673 "write": true, 00:10:45.673 "unmap": true, 00:10:45.673 "flush": true, 00:10:45.673 "reset": true, 00:10:45.673 "nvme_admin": false, 00:10:45.673 "nvme_io": false, 00:10:45.673 "nvme_io_md": false, 00:10:45.673 "write_zeroes": true, 00:10:45.673 "zcopy": true, 00:10:45.673 "get_zone_info": false, 00:10:45.674 "zone_management": false, 00:10:45.674 "zone_append": false, 00:10:45.674 "compare": false, 00:10:45.674 "compare_and_write": false, 00:10:45.674 "abort": true, 00:10:45.674 "seek_hole": false, 00:10:45.674 "seek_data": false, 00:10:45.674 "copy": true, 00:10:45.674 "nvme_iov_md": false 00:10:45.674 }, 00:10:45.674 "memory_domains": [ 00:10:45.674 { 00:10:45.674 "dma_device_id": "system", 00:10:45.674 "dma_device_type": 1 00:10:45.674 }, 00:10:45.674 { 00:10:45.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.674 "dma_device_type": 2 00:10:45.674 } 00:10:45.674 ], 00:10:45.674 "driver_specific": {} 00:10:45.674 } 00:10:45.674 ]' 00:10:45.674 14:36:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:45.674 14:36:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:45.674 14:36:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:10:45.674 14:36:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.674 14:36:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:45.674 [2024-11-04 14:36:54.812753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:10:45.674 [2024-11-04 14:36:54.812916] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:10:45.674 [2024-11-04 14:36:54.812949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2236980 00:10:45.674 [2024-11-04 14:36:54.813037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.932 [2024-11-04 14:36:54.814475] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.932 [2024-11-04 14:36:54.814578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:45.932 Passthru0 00:10:45.932 14:36:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.932 14:36:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:45.932 14:36:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.932 14:36:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:45.932 14:36:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.932 14:36:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:45.932 { 00:10:45.932 "name": "Malloc2", 00:10:45.932 "aliases": [ 00:10:45.932 "77c6038c-ea41-402a-99c0-e2035016d003" 00:10:45.932 ], 00:10:45.932 "product_name": "Malloc disk", 00:10:45.932 "block_size": 512, 00:10:45.932 "num_blocks": 16384, 00:10:45.932 "uuid": "77c6038c-ea41-402a-99c0-e2035016d003", 00:10:45.932 "assigned_rate_limits": { 00:10:45.932 "rw_ios_per_sec": 0, 00:10:45.932 "rw_mbytes_per_sec": 0, 00:10:45.932 "r_mbytes_per_sec": 0, 00:10:45.932 "w_mbytes_per_sec": 0 00:10:45.932 }, 00:10:45.932 "claimed": true, 00:10:45.932 "claim_type": "exclusive_write", 00:10:45.932 "zoned": false, 00:10:45.932 "supported_io_types": { 00:10:45.932 "read": true, 00:10:45.932 "write": true, 00:10:45.932 "unmap": true, 00:10:45.932 "flush": true, 00:10:45.932 "reset": true, 00:10:45.932 "nvme_admin": false, 00:10:45.932 "nvme_io": false, 00:10:45.932 "nvme_io_md": false, 00:10:45.932 "write_zeroes": true, 00:10:45.932 "zcopy": true, 00:10:45.932 "get_zone_info": false, 00:10:45.932 "zone_management": false, 00:10:45.932 "zone_append": false, 00:10:45.932 "compare": false, 00:10:45.932 "compare_and_write": false, 00:10:45.932 "abort": true, 00:10:45.932 "seek_hole": false, 00:10:45.932 "seek_data": false, 00:10:45.932 "copy": true, 00:10:45.932 "nvme_iov_md": false 00:10:45.932 }, 00:10:45.932 "memory_domains": [ 00:10:45.932 { 00:10:45.932 "dma_device_id": "system", 00:10:45.932 "dma_device_type": 1 00:10:45.932 }, 00:10:45.932 { 00:10:45.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.932 "dma_device_type": 2 00:10:45.932 } 00:10:45.932 ], 00:10:45.932 "driver_specific": {} 00:10:45.932 }, 00:10:45.932 { 00:10:45.932 "name": "Passthru0", 00:10:45.932 "aliases": [ 00:10:45.932 "767ad59a-d088-5f3a-b1f0-4ca65af12001" 00:10:45.932 ], 00:10:45.932 "product_name": "passthru", 00:10:45.932 "block_size": 512, 00:10:45.932 "num_blocks": 16384, 00:10:45.932 "uuid": "767ad59a-d088-5f3a-b1f0-4ca65af12001", 00:10:45.932 "assigned_rate_limits": { 00:10:45.932 "rw_ios_per_sec": 0, 00:10:45.932 "rw_mbytes_per_sec": 0, 00:10:45.932 "r_mbytes_per_sec": 0, 00:10:45.932 "w_mbytes_per_sec": 0 00:10:45.932 }, 00:10:45.932 "claimed": false, 00:10:45.932 "zoned": false, 00:10:45.932 "supported_io_types": { 00:10:45.932 "read": true, 00:10:45.932 "write": true, 00:10:45.932 "unmap": true, 00:10:45.932 "flush": true, 00:10:45.932 "reset": true, 00:10:45.932 "nvme_admin": false, 00:10:45.932 "nvme_io": false, 00:10:45.932 
"nvme_io_md": false, 00:10:45.932 "write_zeroes": true, 00:10:45.932 "zcopy": true, 00:10:45.932 "get_zone_info": false, 00:10:45.932 "zone_management": false, 00:10:45.932 "zone_append": false, 00:10:45.932 "compare": false, 00:10:45.932 "compare_and_write": false, 00:10:45.932 "abort": true, 00:10:45.932 "seek_hole": false, 00:10:45.932 "seek_data": false, 00:10:45.932 "copy": true, 00:10:45.932 "nvme_iov_md": false 00:10:45.932 }, 00:10:45.932 "memory_domains": [ 00:10:45.932 { 00:10:45.932 "dma_device_id": "system", 00:10:45.932 "dma_device_type": 1 00:10:45.932 }, 00:10:45.932 { 00:10:45.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.932 "dma_device_type": 2 00:10:45.932 } 00:10:45.932 ], 00:10:45.932 "driver_specific": { 00:10:45.932 "passthru": { 00:10:45.932 "name": "Passthru0", 00:10:45.932 "base_bdev_name": "Malloc2" 00:10:45.932 } 00:10:45.932 } 00:10:45.932 } 00:10:45.932 ]' 00:10:45.932 14:36:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:45.932 14:36:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:45.933 14:36:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:45.933 14:36:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.933 14:36:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:45.933 14:36:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.933 14:36:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:10:45.933 14:36:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.933 14:36:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:45.933 14:36:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.933 14:36:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:45.933 14:36:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.933 14:36:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:45.933 14:36:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.933 14:36:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:45.933 14:36:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:45.933 ************************************ 00:10:45.933 END TEST rpc_daemon_integrity 00:10:45.933 ************************************ 00:10:45.933 14:36:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:45.933 00:10:45.933 real 0m0.227s 00:10:45.933 user 0m0.127s 00:10:45.933 sys 0m0.034s 00:10:45.933 14:36:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:45.933 14:36:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:45.933 14:36:54 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:10:45.933 14:36:54 rpc -- rpc/rpc.sh@84 -- # killprocess 56142 00:10:45.933 14:36:54 rpc -- common/autotest_common.sh@952 -- # '[' -z 56142 ']' 00:10:45.933 14:36:54 rpc -- common/autotest_common.sh@956 -- # kill -0 56142 00:10:45.933 14:36:54 rpc -- common/autotest_common.sh@957 -- # uname 00:10:45.933 14:36:54 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:45.933 14:36:54 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56142 00:10:45.933 killing process with pid 56142 00:10:45.933 14:36:54 rpc -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:45.933 14:36:54 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:45.933 14:36:54 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56142' 00:10:45.933 14:36:54 rpc -- common/autotest_common.sh@971 -- # kill 56142 00:10:45.933 14:36:54 rpc -- common/autotest_common.sh@976 -- # wait 56142 00:10:46.191 ************************************ 00:10:46.191 END TEST rpc 00:10:46.191 ************************************ 00:10:46.191 00:10:46.191 real 0m1.672s 00:10:46.191 user 0m2.113s 00:10:46.191 sys 0m0.463s 00:10:46.191 14:36:55 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:46.191 14:36:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.191 14:36:55 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:10:46.191 14:36:55 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:46.191 14:36:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:46.191 14:36:55 -- common/autotest_common.sh@10 -- # set +x 00:10:46.191 ************************************ 00:10:46.191 START TEST skip_rpc 00:10:46.191 ************************************ 00:10:46.191 14:36:55 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:10:46.191 * Looking for test storage... 00:10:46.191 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:10:46.191 14:36:55 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:46.191 14:36:55 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:10:46.191 14:36:55 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:46.449 14:36:55 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:46.449 14:36:55 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.449 14:36:55 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.449 14:36:55 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.449 14:36:55 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.449 14:36:55 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.449 14:36:55 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.449 14:36:55 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.449 14:36:55 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.449 14:36:55 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.450 14:36:55 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.450 14:36:55 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.450 14:36:55 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:46.450 14:36:55 skip_rpc -- scripts/common.sh@345 -- # : 1 00:10:46.450 14:36:55 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.450 14:36:55 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:46.450 14:36:55 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:46.450 14:36:55 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:10:46.450 14:36:55 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.450 14:36:55 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:10:46.450 14:36:55 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.450 14:36:55 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:46.450 14:36:55 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:10:46.450 14:36:55 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.450 14:36:55 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:10:46.450 14:36:55 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.450 14:36:55 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.450 14:36:55 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.450 14:36:55 skip_rpc -- scripts/common.sh@368 -- # return 0 00:10:46.450 14:36:55 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.450 14:36:55 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:46.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.450 --rc genhtml_branch_coverage=1 00:10:46.450 --rc genhtml_function_coverage=1 00:10:46.450 --rc genhtml_legend=1 00:10:46.450 --rc geninfo_all_blocks=1 00:10:46.450 --rc geninfo_unexecuted_blocks=1 00:10:46.450 00:10:46.450 ' 00:10:46.450 14:36:55 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:46.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.450 --rc genhtml_branch_coverage=1 00:10:46.450 --rc genhtml_function_coverage=1 00:10:46.450 --rc genhtml_legend=1 00:10:46.450 --rc geninfo_all_blocks=1 00:10:46.450 --rc geninfo_unexecuted_blocks=1 00:10:46.450 00:10:46.450 ' 00:10:46.450 14:36:55 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:46.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.450 --rc genhtml_branch_coverage=1 00:10:46.450 --rc genhtml_function_coverage=1 00:10:46.450 --rc genhtml_legend=1 00:10:46.450 --rc geninfo_all_blocks=1 00:10:46.450 --rc geninfo_unexecuted_blocks=1 00:10:46.450 00:10:46.450 ' 00:10:46.450 14:36:55 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:46.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.450 --rc genhtml_branch_coverage=1 00:10:46.450 --rc genhtml_function_coverage=1 00:10:46.450 --rc genhtml_legend=1 00:10:46.450 --rc geninfo_all_blocks=1 00:10:46.450 --rc geninfo_unexecuted_blocks=1 00:10:46.450 00:10:46.450 ' 00:10:46.450 14:36:55 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:46.450 14:36:55 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:46.450 14:36:55 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:10:46.450 14:36:55 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:46.450 14:36:55 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:46.450 14:36:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.450 ************************************ 00:10:46.450 START TEST skip_rpc 00:10:46.450 ************************************ 00:10:46.450 14:36:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:10:46.450 14:36:55 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56330 00:10:46.450 14:36:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:46.450 14:36:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:10:46.450 14:36:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:10:46.450 [2024-11-04 14:36:55.419207] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:10:46.450 [2024-11-04 14:36:55.419276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56330 ] 00:10:46.450 [2024-11-04 14:36:55.554599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.707 [2024-11-04 14:36:55.592308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.707 [2024-11-04 14:36:55.641408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:52.052 14:37:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:10:52.052 14:37:00 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:52.052 14:37:00 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:10:52.052 14:37:00 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:52.052 14:37:00 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:52.052 14:37:00 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:52.052 14:37:00 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:52.052 14:37:00 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:10:52.052 14:37:00 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.052 14:37:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.052 14:37:00 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:52.052 14:37:00 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:52.052 14:37:00 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:52.052 14:37:00 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:52.052 14:37:00 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:52.052 14:37:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:10:52.052 14:37:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56330 00:10:52.052 14:37:00 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 56330 ']' 00:10:52.052 14:37:00 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 56330 00:10:52.052 14:37:00 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:10:52.052 14:37:00 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:52.052 14:37:00 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56330 00:10:52.052 14:37:00 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:52.052 14:37:00 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:52.052 14:37:00 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process 
with pid 56330' 00:10:52.052 killing process with pid 56330 00:10:52.052 14:37:00 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 56330 00:10:52.052 14:37:00 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 56330 00:10:52.052 00:10:52.052 real 0m5.253s 00:10:52.052 user 0m4.976s 00:10:52.052 sys 0m0.176s 00:10:52.052 14:37:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:52.052 ************************************ 00:10:52.052 END TEST skip_rpc 00:10:52.052 ************************************ 00:10:52.052 14:37:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.052 14:37:00 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:10:52.052 14:37:00 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:52.052 14:37:00 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:52.052 14:37:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.052 ************************************ 00:10:52.052 START TEST skip_rpc_with_json 00:10:52.052 ************************************ 00:10:52.052 14:37:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:10:52.052 14:37:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:10:52.052 14:37:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56415 00:10:52.052 14:37:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:52.052 14:37:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:52.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.052 14:37:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56415 00:10:52.052 14:37:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 56415 ']' 00:10:52.052 14:37:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.052 14:37:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:52.052 14:37:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.052 14:37:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:52.052 14:37:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:52.052 [2024-11-04 14:37:00.711385] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:10:52.052 [2024-11-04 14:37:00.711662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56415 ] 00:10:52.052 [2024-11-04 14:37:00.849341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.052 [2024-11-04 14:37:00.892474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.052 [2024-11-04 14:37:00.944173] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:52.052 14:37:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:52.052 14:37:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:10:52.052 14:37:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:10:52.052 14:37:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.052 14:37:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:52.052 [2024-11-04 14:37:01.093147] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:10:52.052 request: 00:10:52.052 { 00:10:52.052 "trtype": "tcp", 00:10:52.052 "method": "nvmf_get_transports", 00:10:52.052 "req_id": 1 00:10:52.052 } 00:10:52.052 Got JSON-RPC error response 00:10:52.052 response: 00:10:52.052 { 00:10:52.052 "code": -19, 00:10:52.052 "message": "No such device" 00:10:52.052 } 00:10:52.052 14:37:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:52.052 14:37:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:10:52.052 14:37:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.052 14:37:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:52.052 [2024-11-04 14:37:01.101231] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:52.052 14:37:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.052 14:37:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:10:52.052 14:37:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.052 14:37:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:52.314 14:37:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.314 14:37:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:52.314 { 00:10:52.314 "subsystems": [ 00:10:52.314 { 00:10:52.314 "subsystem": "fsdev", 00:10:52.314 "config": [ 00:10:52.314 { 00:10:52.314 "method": "fsdev_set_opts", 00:10:52.314 "params": { 00:10:52.314 "fsdev_io_pool_size": 65535, 00:10:52.314 "fsdev_io_cache_size": 256 00:10:52.314 } 00:10:52.314 } 00:10:52.314 ] 00:10:52.314 }, 00:10:52.314 { 00:10:52.314 "subsystem": "keyring", 00:10:52.314 "config": [] 00:10:52.314 }, 00:10:52.314 { 00:10:52.314 "subsystem": "iobuf", 00:10:52.314 "config": [ 00:10:52.314 { 00:10:52.314 "method": "iobuf_set_options", 00:10:52.314 "params": { 00:10:52.314 "small_pool_count": 8192, 00:10:52.314 "large_pool_count": 1024, 00:10:52.314 "small_bufsize": 8192, 00:10:52.314 "large_bufsize": 135168, 00:10:52.314 "enable_numa": false 00:10:52.314 } 
00:10:52.314 } 00:10:52.314 ] 00:10:52.314 }, 00:10:52.314 { 00:10:52.314 "subsystem": "sock", 00:10:52.314 "config": [ 00:10:52.314 { 00:10:52.314 "method": "sock_set_default_impl", 00:10:52.314 "params": { 00:10:52.314 "impl_name": "uring" 00:10:52.314 } 00:10:52.314 }, 00:10:52.314 { 00:10:52.314 "method": "sock_impl_set_options", 00:10:52.314 "params": { 00:10:52.314 "impl_name": "ssl", 00:10:52.314 "recv_buf_size": 4096, 00:10:52.314 "send_buf_size": 4096, 00:10:52.314 "enable_recv_pipe": true, 00:10:52.314 "enable_quickack": false, 00:10:52.314 "enable_placement_id": 0, 00:10:52.314 "enable_zerocopy_send_server": true, 00:10:52.314 "enable_zerocopy_send_client": false, 00:10:52.314 "zerocopy_threshold": 0, 00:10:52.314 "tls_version": 0, 00:10:52.314 "enable_ktls": false 00:10:52.314 } 00:10:52.314 }, 00:10:52.314 { 00:10:52.314 "method": "sock_impl_set_options", 00:10:52.314 "params": { 00:10:52.314 "impl_name": "posix", 00:10:52.314 "recv_buf_size": 2097152, 00:10:52.314 "send_buf_size": 2097152, 00:10:52.314 "enable_recv_pipe": true, 00:10:52.314 "enable_quickack": false, 00:10:52.314 "enable_placement_id": 0, 00:10:52.314 "enable_zerocopy_send_server": true, 00:10:52.314 "enable_zerocopy_send_client": false, 00:10:52.314 "zerocopy_threshold": 0, 00:10:52.314 "tls_version": 0, 00:10:52.314 "enable_ktls": false 00:10:52.314 } 00:10:52.314 }, 00:10:52.314 { 00:10:52.314 "method": "sock_impl_set_options", 00:10:52.314 "params": { 00:10:52.314 "impl_name": "uring", 00:10:52.314 "recv_buf_size": 2097152, 00:10:52.314 "send_buf_size": 2097152, 00:10:52.314 "enable_recv_pipe": true, 00:10:52.314 "enable_quickack": false, 00:10:52.314 "enable_placement_id": 0, 00:10:52.314 "enable_zerocopy_send_server": false, 00:10:52.314 "enable_zerocopy_send_client": false, 00:10:52.314 "zerocopy_threshold": 0, 00:10:52.314 "tls_version": 0, 00:10:52.314 "enable_ktls": false 00:10:52.314 } 00:10:52.314 } 00:10:52.314 ] 00:10:52.314 }, 00:10:52.314 { 00:10:52.314 "subsystem": "vmd", 00:10:52.314 "config": [] 00:10:52.314 }, 00:10:52.314 { 00:10:52.314 "subsystem": "accel", 00:10:52.314 "config": [ 00:10:52.314 { 00:10:52.314 "method": "accel_set_options", 00:10:52.314 "params": { 00:10:52.314 "small_cache_size": 128, 00:10:52.314 "large_cache_size": 16, 00:10:52.314 "task_count": 2048, 00:10:52.314 "sequence_count": 2048, 00:10:52.314 "buf_count": 2048 00:10:52.314 } 00:10:52.314 } 00:10:52.314 ] 00:10:52.314 }, 00:10:52.314 { 00:10:52.314 "subsystem": "bdev", 00:10:52.314 "config": [ 00:10:52.314 { 00:10:52.314 "method": "bdev_set_options", 00:10:52.314 "params": { 00:10:52.314 "bdev_io_pool_size": 65535, 00:10:52.314 "bdev_io_cache_size": 256, 00:10:52.314 "bdev_auto_examine": true, 00:10:52.314 "iobuf_small_cache_size": 128, 00:10:52.314 "iobuf_large_cache_size": 16 00:10:52.314 } 00:10:52.314 }, 00:10:52.314 { 00:10:52.314 "method": "bdev_raid_set_options", 00:10:52.314 "params": { 00:10:52.314 "process_window_size_kb": 1024, 00:10:52.314 "process_max_bandwidth_mb_sec": 0 00:10:52.314 } 00:10:52.314 }, 00:10:52.314 { 00:10:52.314 "method": "bdev_iscsi_set_options", 00:10:52.314 "params": { 00:10:52.314 "timeout_sec": 30 00:10:52.314 } 00:10:52.314 }, 00:10:52.314 { 00:10:52.314 "method": "bdev_nvme_set_options", 00:10:52.314 "params": { 00:10:52.314 "action_on_timeout": "none", 00:10:52.314 "timeout_us": 0, 00:10:52.314 "timeout_admin_us": 0, 00:10:52.314 "keep_alive_timeout_ms": 10000, 00:10:52.314 "arbitration_burst": 0, 00:10:52.314 "low_priority_weight": 0, 00:10:52.314 "medium_priority_weight": 
0, 00:10:52.314 "high_priority_weight": 0, 00:10:52.314 "nvme_adminq_poll_period_us": 10000, 00:10:52.314 "nvme_ioq_poll_period_us": 0, 00:10:52.314 "io_queue_requests": 0, 00:10:52.314 "delay_cmd_submit": true, 00:10:52.314 "transport_retry_count": 4, 00:10:52.314 "bdev_retry_count": 3, 00:10:52.314 "transport_ack_timeout": 0, 00:10:52.314 "ctrlr_loss_timeout_sec": 0, 00:10:52.314 "reconnect_delay_sec": 0, 00:10:52.314 "fast_io_fail_timeout_sec": 0, 00:10:52.314 "disable_auto_failback": false, 00:10:52.314 "generate_uuids": false, 00:10:52.314 "transport_tos": 0, 00:10:52.314 "nvme_error_stat": false, 00:10:52.314 "rdma_srq_size": 0, 00:10:52.314 "io_path_stat": false, 00:10:52.314 "allow_accel_sequence": false, 00:10:52.314 "rdma_max_cq_size": 0, 00:10:52.314 "rdma_cm_event_timeout_ms": 0, 00:10:52.314 "dhchap_digests": [ 00:10:52.314 "sha256", 00:10:52.314 "sha384", 00:10:52.314 "sha512" 00:10:52.314 ], 00:10:52.314 "dhchap_dhgroups": [ 00:10:52.314 "null", 00:10:52.314 "ffdhe2048", 00:10:52.314 "ffdhe3072", 00:10:52.314 "ffdhe4096", 00:10:52.314 "ffdhe6144", 00:10:52.314 "ffdhe8192" 00:10:52.314 ] 00:10:52.314 } 00:10:52.314 }, 00:10:52.314 { 00:10:52.314 "method": "bdev_nvme_set_hotplug", 00:10:52.314 "params": { 00:10:52.314 "period_us": 100000, 00:10:52.314 "enable": false 00:10:52.314 } 00:10:52.314 }, 00:10:52.314 { 00:10:52.314 "method": "bdev_wait_for_examine" 00:10:52.314 } 00:10:52.314 ] 00:10:52.314 }, 00:10:52.314 { 00:10:52.314 "subsystem": "scsi", 00:10:52.314 "config": null 00:10:52.314 }, 00:10:52.314 { 00:10:52.314 "subsystem": "scheduler", 00:10:52.314 "config": [ 00:10:52.314 { 00:10:52.314 "method": "framework_set_scheduler", 00:10:52.314 "params": { 00:10:52.314 "name": "static" 00:10:52.314 } 00:10:52.314 } 00:10:52.314 ] 00:10:52.314 }, 00:10:52.314 { 00:10:52.314 "subsystem": "vhost_scsi", 00:10:52.314 "config": [] 00:10:52.314 }, 00:10:52.314 { 00:10:52.314 "subsystem": "vhost_blk", 00:10:52.314 "config": [] 00:10:52.314 }, 00:10:52.314 { 00:10:52.314 "subsystem": "ublk", 00:10:52.314 "config": [] 00:10:52.314 }, 00:10:52.314 { 00:10:52.314 "subsystem": "nbd", 00:10:52.314 "config": [] 00:10:52.314 }, 00:10:52.314 { 00:10:52.314 "subsystem": "nvmf", 00:10:52.314 "config": [ 00:10:52.314 { 00:10:52.314 "method": "nvmf_set_config", 00:10:52.314 "params": { 00:10:52.314 "discovery_filter": "match_any", 00:10:52.314 "admin_cmd_passthru": { 00:10:52.314 "identify_ctrlr": false 00:10:52.314 }, 00:10:52.314 "dhchap_digests": [ 00:10:52.314 "sha256", 00:10:52.314 "sha384", 00:10:52.314 "sha512" 00:10:52.314 ], 00:10:52.314 "dhchap_dhgroups": [ 00:10:52.314 "null", 00:10:52.314 "ffdhe2048", 00:10:52.314 "ffdhe3072", 00:10:52.314 "ffdhe4096", 00:10:52.314 "ffdhe6144", 00:10:52.314 "ffdhe8192" 00:10:52.314 ] 00:10:52.314 } 00:10:52.315 }, 00:10:52.315 { 00:10:52.315 "method": "nvmf_set_max_subsystems", 00:10:52.315 "params": { 00:10:52.315 "max_subsystems": 1024 00:10:52.315 } 00:10:52.315 }, 00:10:52.315 { 00:10:52.315 "method": "nvmf_set_crdt", 00:10:52.315 "params": { 00:10:52.315 "crdt1": 0, 00:10:52.315 "crdt2": 0, 00:10:52.315 "crdt3": 0 00:10:52.315 } 00:10:52.315 }, 00:10:52.315 { 00:10:52.315 "method": "nvmf_create_transport", 00:10:52.315 "params": { 00:10:52.315 "trtype": "TCP", 00:10:52.315 "max_queue_depth": 128, 00:10:52.315 "max_io_qpairs_per_ctrlr": 127, 00:10:52.315 "in_capsule_data_size": 4096, 00:10:52.315 "max_io_size": 131072, 00:10:52.315 "io_unit_size": 131072, 00:10:52.315 "max_aq_depth": 128, 00:10:52.315 "num_shared_buffers": 511, 00:10:52.315 
"buf_cache_size": 4294967295, 00:10:52.315 "dif_insert_or_strip": false, 00:10:52.315 "zcopy": false, 00:10:52.315 "c2h_success": true, 00:10:52.315 "sock_priority": 0, 00:10:52.315 "abort_timeout_sec": 1, 00:10:52.315 "ack_timeout": 0, 00:10:52.315 "data_wr_pool_size": 0 00:10:52.315 } 00:10:52.315 } 00:10:52.315 ] 00:10:52.315 }, 00:10:52.315 { 00:10:52.315 "subsystem": "iscsi", 00:10:52.315 "config": [ 00:10:52.315 { 00:10:52.315 "method": "iscsi_set_options", 00:10:52.315 "params": { 00:10:52.315 "node_base": "iqn.2016-06.io.spdk", 00:10:52.315 "max_sessions": 128, 00:10:52.315 "max_connections_per_session": 2, 00:10:52.315 "max_queue_depth": 64, 00:10:52.315 "default_time2wait": 2, 00:10:52.315 "default_time2retain": 20, 00:10:52.315 "first_burst_length": 8192, 00:10:52.315 "immediate_data": true, 00:10:52.315 "allow_duplicated_isid": false, 00:10:52.315 "error_recovery_level": 0, 00:10:52.315 "nop_timeout": 60, 00:10:52.315 "nop_in_interval": 30, 00:10:52.315 "disable_chap": false, 00:10:52.315 "require_chap": false, 00:10:52.315 "mutual_chap": false, 00:10:52.315 "chap_group": 0, 00:10:52.315 "max_large_datain_per_connection": 64, 00:10:52.315 "max_r2t_per_connection": 4, 00:10:52.315 "pdu_pool_size": 36864, 00:10:52.315 "immediate_data_pool_size": 16384, 00:10:52.315 "data_out_pool_size": 2048 00:10:52.315 } 00:10:52.315 } 00:10:52.315 ] 00:10:52.315 } 00:10:52.315 ] 00:10:52.315 } 00:10:52.315 14:37:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:52.315 14:37:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56415 00:10:52.315 14:37:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 56415 ']' 00:10:52.315 14:37:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 56415 00:10:52.315 14:37:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:10:52.315 14:37:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:52.315 14:37:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56415 00:10:52.315 killing process with pid 56415 00:10:52.315 14:37:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:52.315 14:37:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:52.315 14:37:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56415' 00:10:52.315 14:37:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 56415 00:10:52.315 14:37:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 56415 00:10:52.573 14:37:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=56431 00:10:52.573 14:37:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:10:52.573 14:37:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:57.837 14:37:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 56431 00:10:57.837 14:37:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 56431 ']' 00:10:57.837 14:37:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 56431 00:10:57.837 14:37:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:10:57.837 14:37:06 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:57.837 14:37:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56431 00:10:57.837 killing process with pid 56431 00:10:57.837 14:37:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:57.837 14:37:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:57.837 14:37:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56431' 00:10:57.837 14:37:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 56431 00:10:57.837 14:37:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 56431 00:10:57.837 14:37:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:57.837 14:37:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:57.837 00:10:57.837 real 0m6.059s 00:10:57.837 user 0m5.725s 00:10:57.837 sys 0m0.410s 00:10:57.837 14:37:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:57.837 ************************************ 00:10:57.837 END TEST skip_rpc_with_json 00:10:57.837 ************************************ 00:10:57.837 14:37:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:57.837 14:37:06 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:10:57.837 14:37:06 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:57.837 14:37:06 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:57.838 14:37:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.838 ************************************ 00:10:57.838 START TEST skip_rpc_with_delay 00:10:57.838 ************************************ 00:10:57.838 14:37:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:10:57.838 14:37:06 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:57.838 14:37:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:10:57.838 14:37:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:57.838 14:37:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:57.838 14:37:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:57.838 14:37:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:57.838 14:37:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:57.838 14:37:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:57.838 14:37:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:57.838 14:37:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:57.838 14:37:06 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:57.838 14:37:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:57.838 [2024-11-04 14:37:06.805773] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:10:57.838 ************************************ 00:10:57.838 END TEST skip_rpc_with_delay 00:10:57.838 ************************************ 00:10:57.838 14:37:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:10:57.838 14:37:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:57.838 14:37:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:57.838 14:37:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:57.838 00:10:57.838 real 0m0.057s 00:10:57.838 user 0m0.035s 00:10:57.838 sys 0m0.022s 00:10:57.838 14:37:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:57.838 14:37:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:10:57.838 14:37:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:10:57.838 14:37:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:10:57.838 14:37:06 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:10:57.838 14:37:06 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:57.838 14:37:06 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:57.838 14:37:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.838 ************************************ 00:10:57.838 START TEST exit_on_failed_rpc_init 00:10:57.838 ************************************ 00:10:57.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.838 14:37:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:10:57.838 14:37:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=56540 00:10:57.838 14:37:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 56540 00:10:57.838 14:37:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 56540 ']' 00:10:57.838 14:37:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.838 14:37:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:57.838 14:37:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:57.838 14:37:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.838 14:37:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:57.838 14:37:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:57.838 [2024-11-04 14:37:06.892598] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:10:57.838 [2024-11-04 14:37:06.892696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56540 ] 00:10:58.096 [2024-11-04 14:37:07.025647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.096 [2024-11-04 14:37:07.060521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.096 [2024-11-04 14:37:07.108858] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:58.660 14:37:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:58.660 14:37:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:10:58.660 14:37:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:58.660 14:37:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:58.660 14:37:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:10:58.661 14:37:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:58.661 14:37:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:58.661 14:37:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:58.661 14:37:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:58.661 14:37:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:58.661 14:37:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:58.661 14:37:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:58.661 14:37:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:58.661 14:37:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:58.661 14:37:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:58.929 [2024-11-04 14:37:07.837774] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:10:58.929 [2024-11-04 14:37:07.838016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56553 ] 00:10:58.929 [2024-11-04 14:37:07.980237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.929 [2024-11-04 14:37:08.017168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.929 [2024-11-04 14:37:08.017375] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:10:58.929 [2024-11-04 14:37:08.017649] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:58.929 [2024-11-04 14:37:08.017836] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:58.929 14:37:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:10:58.929 14:37:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:58.929 14:37:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:10:58.929 14:37:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:10:58.929 14:37:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:10:58.929 14:37:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:58.929 14:37:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:58.929 14:37:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 56540 00:10:58.929 14:37:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 56540 ']' 00:10:58.929 14:37:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 56540 00:10:58.929 14:37:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:10:58.929 14:37:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:58.929 14:37:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56540 00:10:59.187 14:37:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:59.187 14:37:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:59.187 killing process with pid 56540 00:10:59.187 14:37:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56540' 00:10:59.187 14:37:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 56540 00:10:59.187 14:37:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 56540 00:10:59.187 ************************************ 00:10:59.187 END TEST exit_on_failed_rpc_init 00:10:59.187 ************************************ 00:10:59.187 00:10:59.187 real 0m1.429s 00:10:59.187 user 0m1.669s 00:10:59.187 sys 0m0.254s 00:10:59.187 14:37:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:59.187 14:37:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:59.187 14:37:08 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:59.187 ************************************ 00:10:59.187 END TEST skip_rpc 00:10:59.187 ************************************ 00:10:59.187 00:10:59.187 real 0m13.081s 00:10:59.187 user 0m12.536s 00:10:59.187 sys 0m1.010s 00:10:59.187 14:37:08 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:59.187 14:37:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.445 14:37:08 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:59.445 14:37:08 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:59.445 14:37:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:59.445 14:37:08 -- common/autotest_common.sh@10 -- # set +x 00:10:59.445 
************************************ 00:10:59.445 START TEST rpc_client 00:10:59.445 ************************************ 00:10:59.445 14:37:08 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:59.445 * Looking for test storage... 00:10:59.445 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:10:59.445 14:37:08 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:59.445 14:37:08 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:10:59.445 14:37:08 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:59.445 14:37:08 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:59.445 14:37:08 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.445 14:37:08 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.445 14:37:08 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.446 14:37:08 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.446 14:37:08 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.446 14:37:08 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.446 14:37:08 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:10:59.446 14:37:08 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.446 14:37:08 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.446 14:37:08 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.446 14:37:08 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.446 14:37:08 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:10:59.446 14:37:08 rpc_client -- scripts/common.sh@345 -- # : 1 00:10:59.446 14:37:08 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.446 14:37:08 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:59.446 14:37:08 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:10:59.446 14:37:08 rpc_client -- scripts/common.sh@353 -- # local d=1 00:10:59.446 14:37:08 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.446 14:37:08 rpc_client -- scripts/common.sh@355 -- # echo 1 00:10:59.446 14:37:08 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.446 14:37:08 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:10:59.446 14:37:08 rpc_client -- scripts/common.sh@353 -- # local d=2 00:10:59.446 14:37:08 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.446 14:37:08 rpc_client -- scripts/common.sh@355 -- # echo 2 00:10:59.446 14:37:08 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:10:59.446 14:37:08 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.446 14:37:08 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.446 14:37:08 rpc_client -- scripts/common.sh@368 -- # return 0 00:10:59.446 14:37:08 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.446 14:37:08 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:59.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.446 --rc genhtml_branch_coverage=1 00:10:59.446 --rc genhtml_function_coverage=1 00:10:59.446 --rc genhtml_legend=1 00:10:59.446 --rc geninfo_all_blocks=1 00:10:59.446 --rc geninfo_unexecuted_blocks=1 00:10:59.446 00:10:59.446 ' 00:10:59.446 14:37:08 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:59.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.446 --rc genhtml_branch_coverage=1 00:10:59.446 --rc genhtml_function_coverage=1 00:10:59.446 --rc genhtml_legend=1 00:10:59.446 --rc geninfo_all_blocks=1 00:10:59.446 --rc geninfo_unexecuted_blocks=1 00:10:59.446 00:10:59.446 ' 00:10:59.446 14:37:08 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:59.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.446 --rc genhtml_branch_coverage=1 00:10:59.446 --rc genhtml_function_coverage=1 00:10:59.446 --rc genhtml_legend=1 00:10:59.446 --rc geninfo_all_blocks=1 00:10:59.446 --rc geninfo_unexecuted_blocks=1 00:10:59.446 00:10:59.446 ' 00:10:59.446 14:37:08 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:59.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.446 --rc genhtml_branch_coverage=1 00:10:59.446 --rc genhtml_function_coverage=1 00:10:59.446 --rc genhtml_legend=1 00:10:59.446 --rc geninfo_all_blocks=1 00:10:59.446 --rc geninfo_unexecuted_blocks=1 00:10:59.446 00:10:59.446 ' 00:10:59.446 14:37:08 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:10:59.446 OK 00:10:59.446 14:37:08 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:10:59.446 00:10:59.446 real 0m0.142s 00:10:59.446 user 0m0.094s 00:10:59.446 sys 0m0.055s 00:10:59.446 ************************************ 00:10:59.446 END TEST rpc_client 00:10:59.446 ************************************ 00:10:59.446 14:37:08 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:59.446 14:37:08 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:10:59.446 14:37:08 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:59.446 14:37:08 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:59.446 14:37:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:59.446 14:37:08 -- common/autotest_common.sh@10 -- # set +x 00:10:59.446 ************************************ 00:10:59.446 START TEST json_config 00:10:59.446 ************************************ 00:10:59.446 14:37:08 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:59.446 14:37:08 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:59.446 14:37:08 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:59.446 14:37:08 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:10:59.704 14:37:08 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:59.704 14:37:08 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.704 14:37:08 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.704 14:37:08 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.704 14:37:08 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.704 14:37:08 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.704 14:37:08 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.704 14:37:08 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:10:59.704 14:37:08 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.704 14:37:08 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.704 14:37:08 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.704 14:37:08 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.704 14:37:08 json_config -- scripts/common.sh@344 -- # case "$op" in 00:10:59.704 14:37:08 json_config -- scripts/common.sh@345 -- # : 1 00:10:59.704 14:37:08 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.704 14:37:08 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:59.704 14:37:08 json_config -- scripts/common.sh@365 -- # decimal 1 00:10:59.704 14:37:08 json_config -- scripts/common.sh@353 -- # local d=1 00:10:59.704 14:37:08 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.704 14:37:08 json_config -- scripts/common.sh@355 -- # echo 1 00:10:59.704 14:37:08 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.704 14:37:08 json_config -- scripts/common.sh@366 -- # decimal 2 00:10:59.704 14:37:08 json_config -- scripts/common.sh@353 -- # local d=2 00:10:59.704 14:37:08 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.704 14:37:08 json_config -- scripts/common.sh@355 -- # echo 2 00:10:59.704 14:37:08 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:10:59.704 14:37:08 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.704 14:37:08 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.704 14:37:08 json_config -- scripts/common.sh@368 -- # return 0 00:10:59.704 14:37:08 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.704 14:37:08 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:59.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.704 --rc genhtml_branch_coverage=1 00:10:59.704 --rc genhtml_function_coverage=1 00:10:59.704 --rc genhtml_legend=1 00:10:59.704 --rc geninfo_all_blocks=1 00:10:59.704 --rc geninfo_unexecuted_blocks=1 00:10:59.704 00:10:59.704 ' 00:10:59.704 14:37:08 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:59.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.704 --rc genhtml_branch_coverage=1 00:10:59.704 --rc genhtml_function_coverage=1 00:10:59.704 --rc genhtml_legend=1 00:10:59.704 --rc geninfo_all_blocks=1 00:10:59.704 --rc geninfo_unexecuted_blocks=1 00:10:59.704 00:10:59.704 ' 00:10:59.704 14:37:08 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:59.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.704 --rc genhtml_branch_coverage=1 00:10:59.704 --rc genhtml_function_coverage=1 00:10:59.704 --rc genhtml_legend=1 00:10:59.704 --rc geninfo_all_blocks=1 00:10:59.704 --rc geninfo_unexecuted_blocks=1 00:10:59.704 00:10:59.704 ' 00:10:59.704 14:37:08 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:59.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.704 --rc genhtml_branch_coverage=1 00:10:59.704 --rc genhtml_function_coverage=1 00:10:59.704 --rc genhtml_legend=1 00:10:59.704 --rc geninfo_all_blocks=1 00:10:59.704 --rc geninfo_unexecuted_blocks=1 00:10:59.704 00:10:59.704 ' 00:10:59.704 14:37:08 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:59.704 14:37:08 json_config -- nvmf/common.sh@7 -- # uname -s 00:10:59.704 14:37:08 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.704 14:37:08 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.704 14:37:08 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.704 14:37:08 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.704 14:37:08 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.704 14:37:08 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.704 14:37:08 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.704 14:37:08 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.704 14:37:08 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.704 14:37:08 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.704 14:37:08 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:10:59.704 14:37:08 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:10:59.704 14:37:08 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.704 14:37:08 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.704 14:37:08 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:59.704 14:37:08 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.704 14:37:08 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:59.704 14:37:08 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:10:59.704 14:37:08 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.704 14:37:08 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.704 14:37:08 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.704 14:37:08 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.704 14:37:08 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.704 14:37:08 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.704 14:37:08 json_config -- paths/export.sh@5 -- # export PATH 00:10:59.704 14:37:08 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.704 14:37:08 json_config -- nvmf/common.sh@51 -- # : 0 00:10:59.704 14:37:08 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:59.704 14:37:08 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:59.704 14:37:08 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.704 14:37:08 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.704 14:37:08 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.704 14:37:08 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:59.705 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:59.705 14:37:08 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:59.705 14:37:08 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:59.705 14:37:08 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:59.705 14:37:08 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:59.705 14:37:08 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:10:59.705 14:37:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:10:59.705 14:37:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:10:59.705 14:37:08 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:10:59.705 14:37:08 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:10:59.705 14:37:08 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:10:59.705 14:37:08 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:10:59.705 14:37:08 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:10:59.705 14:37:08 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:10:59.705 14:37:08 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:10:59.705 14:37:08 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:10:59.705 14:37:08 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:10:59.705 14:37:08 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:10:59.705 14:37:08 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:59.705 14:37:08 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:10:59.705 INFO: JSON configuration test init 00:10:59.705 14:37:08 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:10:59.705 14:37:08 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:10:59.705 14:37:08 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:59.705 14:37:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:59.705 14:37:08 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:10:59.705 14:37:08 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:59.705 14:37:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:59.705 14:37:08 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:10:59.705 14:37:08 json_config -- json_config/common.sh@9 -- # local app=target 00:10:59.705 14:37:08 json_config -- json_config/common.sh@10 -- # shift 
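For orientation, the json_config/common.sh values traced above (app_socket, app_params and configs_path for the 'target' app) amount to a launch sequence roughly like the sketch below. Paths and flags are taken from the log, but this is a simplified approximation of json_config_test_start_app, not the script's actual source, and the readiness poll via rpc_get_methods stands in for the real waitforlisten helper.

  # Rough approximation of: json_config_test_start_app target --wait-for-rpc
  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  RPC_SOCK=/var/tmp/spdk_tgt.sock

  # -m 0x1: single core, -s 1024: 1024 MB of memory, -r: RPC socket path,
  # --wait-for-rpc: hold off subsystem initialization until started via RPC.
  "$SPDK_BIN" -m 0x1 -s 1024 -r "$RPC_SOCK" --wait-for-rpc &
  app_pid=$!

  # Poll until the target answers on its RPC socket before issuing further RPCs.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done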
00:10:59.705 14:37:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:59.705 14:37:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:59.705 14:37:08 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:10:59.705 14:37:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:59.705 14:37:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:59.705 14:37:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=56687 00:10:59.705 14:37:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:59.705 Waiting for target to run... 00:10:59.705 14:37:08 json_config -- json_config/common.sh@25 -- # waitforlisten 56687 /var/tmp/spdk_tgt.sock 00:10:59.705 14:37:08 json_config -- common/autotest_common.sh@833 -- # '[' -z 56687 ']' 00:10:59.705 14:37:08 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:59.705 14:37:08 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:59.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:59.705 14:37:08 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:10:59.705 14:37:08 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:59.705 14:37:08 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:59.705 14:37:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:59.705 [2024-11-04 14:37:08.721318] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:10:59.705 [2024-11-04 14:37:08.721531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56687 ] 00:10:59.962 [2024-11-04 14:37:09.021981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.962 [2024-11-04 14:37:09.047082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.526 00:11:00.526 14:37:09 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:00.526 14:37:09 json_config -- common/autotest_common.sh@866 -- # return 0 00:11:00.526 14:37:09 json_config -- json_config/common.sh@26 -- # echo '' 00:11:00.526 14:37:09 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:11:00.527 14:37:09 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:11:00.527 14:37:09 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:00.527 14:37:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:00.527 14:37:09 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:11:00.527 14:37:09 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:11:00.527 14:37:09 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:00.527 14:37:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:00.783 14:37:09 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:11:00.783 14:37:09 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:11:00.783 14:37:09 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:11:00.783 [2024-11-04 14:37:09.916873] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:01.041 14:37:10 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:11:01.041 14:37:10 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:11:01.041 14:37:10 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:01.041 14:37:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:01.041 14:37:10 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:11:01.041 14:37:10 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:11:01.041 14:37:10 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:11:01.041 14:37:10 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:11:01.041 14:37:10 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:11:01.041 14:37:10 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:11:01.041 14:37:10 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:11:01.041 14:37:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:11:01.405 14:37:10 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:11:01.405 14:37:10 json_config -- json_config/json_config.sh@51 -- # local get_types 00:11:01.405 14:37:10 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:11:01.405 14:37:10 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:11:01.405 14:37:10 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:11:01.405 14:37:10 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:11:01.405 14:37:10 json_config -- json_config/json_config.sh@54 -- # sort 00:11:01.405 14:37:10 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:11:01.405 14:37:10 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:11:01.405 14:37:10 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:11:01.405 14:37:10 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:01.405 14:37:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:01.405 14:37:10 json_config -- json_config/json_config.sh@62 -- # return 0 00:11:01.405 14:37:10 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:11:01.405 14:37:10 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:11:01.405 14:37:10 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:11:01.405 14:37:10 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:11:01.405 14:37:10 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:11:01.405 14:37:10 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:11:01.405 14:37:10 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:01.405 14:37:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:01.405 14:37:10 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:11:01.405 14:37:10 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:11:01.405 14:37:10 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:11:01.405 14:37:10 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:11:01.405 14:37:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:11:01.668 MallocForNvmf0 00:11:01.668 14:37:10 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:11:01.668 14:37:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:11:01.668 MallocForNvmf1 00:11:01.668 14:37:10 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:11:01.668 14:37:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:11:01.926 [2024-11-04 14:37:10.950620] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.926 14:37:10 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:01.926 14:37:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:02.184 14:37:11 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:11:02.184 14:37:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:11:02.443 14:37:11 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:11:02.443 14:37:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:11:02.702 14:37:11 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:11:02.702 14:37:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:11:02.702 [2024-11-04 14:37:11.774961] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:11:02.702 14:37:11 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:11:02.702 14:37:11 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:02.702 14:37:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:02.702 14:37:11 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:11:02.702 14:37:11 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:02.702 14:37:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:02.960 14:37:11 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:11:02.960 14:37:11 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:11:02.960 14:37:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:11:02.960 MallocBdevForConfigChangeCheck 00:11:02.960 14:37:12 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:11:02.960 14:37:12 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:02.960 14:37:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:03.218 14:37:12 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:11:03.218 14:37:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:03.476 INFO: shutting down applications... 00:11:03.476 14:37:12 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:11:03.476 14:37:12 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:11:03.476 14:37:12 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:11:03.476 14:37:12 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:11:03.476 14:37:12 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:11:03.734 Calling clear_iscsi_subsystem 00:11:03.734 Calling clear_nvmf_subsystem 00:11:03.734 Calling clear_nbd_subsystem 00:11:03.734 Calling clear_ublk_subsystem 00:11:03.734 Calling clear_vhost_blk_subsystem 00:11:03.734 Calling clear_vhost_scsi_subsystem 00:11:03.734 Calling clear_bdev_subsystem 00:11:03.735 14:37:12 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:11:03.735 14:37:12 json_config -- json_config/json_config.sh@350 -- # count=100 00:11:03.735 14:37:12 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:11:03.735 14:37:12 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:11:03.735 14:37:12 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:03.735 14:37:12 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:11:03.993 14:37:13 json_config -- json_config/json_config.sh@352 -- # break 00:11:03.993 14:37:13 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:11:03.993 14:37:13 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:11:03.993 14:37:13 json_config -- json_config/common.sh@31 -- # local app=target 00:11:03.993 14:37:13 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:11:03.993 14:37:13 json_config -- json_config/common.sh@35 -- # [[ -n 56687 ]] 00:11:03.993 14:37:13 json_config -- json_config/common.sh@38 -- # kill -SIGINT 56687 00:11:03.993 14:37:13 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:11:03.993 14:37:13 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:03.993 14:37:13 json_config -- json_config/common.sh@41 -- # kill -0 56687 00:11:03.993 14:37:13 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:11:04.560 14:37:13 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:11:04.560 14:37:13 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:04.560 14:37:13 json_config -- json_config/common.sh@41 -- # kill -0 56687 00:11:04.560 14:37:13 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:11:04.560 SPDK target shutdown done 00:11:04.560 INFO: relaunching applications... 00:11:04.560 14:37:13 json_config -- json_config/common.sh@43 -- # break 00:11:04.560 14:37:13 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:11:04.560 14:37:13 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:11:04.560 14:37:13 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:11:04.560 14:37:13 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:04.560 14:37:13 json_config -- json_config/common.sh@9 -- # local app=target 00:11:04.560 14:37:13 json_config -- json_config/common.sh@10 -- # shift 00:11:04.560 14:37:13 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:04.560 14:37:13 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:04.560 14:37:13 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:11:04.560 14:37:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:04.560 14:37:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:04.560 Waiting for target to run... 00:11:04.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:04.560 14:37:13 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=56877 00:11:04.560 14:37:13 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:11:04.560 14:37:13 json_config -- json_config/common.sh@25 -- # waitforlisten 56877 /var/tmp/spdk_tgt.sock 00:11:04.560 14:37:13 json_config -- common/autotest_common.sh@833 -- # '[' -z 56877 ']' 00:11:04.560 14:37:13 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:04.560 14:37:13 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:04.560 14:37:13 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:04.560 14:37:13 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:04.560 14:37:13 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:04.560 14:37:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:04.560 [2024-11-04 14:37:13.662582] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:11:04.560 [2024-11-04 14:37:13.662841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56877 ] 00:11:05.126 [2024-11-04 14:37:13.965838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.126 [2024-11-04 14:37:13.998425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.126 [2024-11-04 14:37:14.137178] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:05.384 [2024-11-04 14:37:14.352661] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:05.384 [2024-11-04 14:37:14.384746] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:11:05.650 00:11:05.650 INFO: Checking if target configuration is the same... 00:11:05.651 14:37:14 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:05.651 14:37:14 json_config -- common/autotest_common.sh@866 -- # return 0 00:11:05.651 14:37:14 json_config -- json_config/common.sh@26 -- # echo '' 00:11:05.651 14:37:14 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:11:05.651 14:37:14 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:11:05.651 14:37:14 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:05.651 14:37:14 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:11:05.651 14:37:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:05.651 + '[' 2 -ne 2 ']' 00:11:05.651 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:11:05.651 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:11:05.651 + rootdir=/home/vagrant/spdk_repo/spdk 00:11:05.651 +++ basename /dev/fd/62 00:11:05.651 ++ mktemp /tmp/62.XXX 00:11:05.651 + tmp_file_1=/tmp/62.Qd4 00:11:05.651 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:05.651 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:11:05.651 + tmp_file_2=/tmp/spdk_tgt_config.json.sXj 00:11:05.651 + ret=0 00:11:05.651 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:05.909 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:06.168 + diff -u /tmp/62.Qd4 /tmp/spdk_tgt_config.json.sXj 00:11:06.168 INFO: JSON config files are the same 00:11:06.168 + echo 'INFO: JSON config files are the same' 00:11:06.168 + rm /tmp/62.Qd4 /tmp/spdk_tgt_config.json.sXj 00:11:06.168 + exit 0 00:11:06.168 INFO: changing configuration and checking if this can be detected... 00:11:06.168 14:37:15 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:11:06.168 14:37:15 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
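The "configuration is the same" check that just passed reduces to dumping the live config over RPC and diffing it against the JSON file the target was started from, with both documents key-sorted first. A minimal sketch under the paths shown in the log follows; the real json_diff.sh feeds save_config output through /dev/fd/62 rather than an intermediate file, so treat this as an approximation:

  ROOT=/home/vagrant/spdk_repo/spdk
  RPC_SOCK=/var/tmp/spdk_tgt.sock
  live_cfg=$(mktemp /tmp/62.XXX)

  # Dump the running target's configuration, then sort both it and the on-disk
  # config so JSON key order cannot cause spurious differences.
  "$ROOT/scripts/rpc.py" -s "$RPC_SOCK" save_config > "$live_cfg"
  "$ROOT/test/json_config/config_filter.py" -method sort < "$live_cfg" > "$live_cfg.sorted"
  "$ROOT/test/json_config/config_filter.py" -method sort < "$ROOT/spdk_tgt_config.json" > /tmp/file_cfg.sorted

  # An empty diff means the saved and loaded configurations match.
  diff -u "$live_cfg.sorted" /tmp/file_cfg.sorted && echo 'INFO: JSON config files are the same'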
00:11:06.168 14:37:15 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:11:06.168 14:37:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:11:06.168 14:37:15 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:06.168 14:37:15 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:11:06.168 14:37:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:06.168 + '[' 2 -ne 2 ']' 00:11:06.168 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:11:06.168 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:11:06.168 + rootdir=/home/vagrant/spdk_repo/spdk 00:11:06.168 +++ basename /dev/fd/62 00:11:06.168 ++ mktemp /tmp/62.XXX 00:11:06.168 + tmp_file_1=/tmp/62.pnN 00:11:06.168 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:06.168 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:11:06.168 + tmp_file_2=/tmp/spdk_tgt_config.json.NMA 00:11:06.168 + ret=0 00:11:06.168 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:06.734 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:06.734 + diff -u /tmp/62.pnN /tmp/spdk_tgt_config.json.NMA 00:11:06.734 + ret=1 00:11:06.734 + echo '=== Start of file: /tmp/62.pnN ===' 00:11:06.734 + cat /tmp/62.pnN 00:11:06.734 + echo '=== End of file: /tmp/62.pnN ===' 00:11:06.734 + echo '' 00:11:06.734 + echo '=== Start of file: /tmp/spdk_tgt_config.json.NMA ===' 00:11:06.734 + cat /tmp/spdk_tgt_config.json.NMA 00:11:06.734 + echo '=== End of file: /tmp/spdk_tgt_config.json.NMA ===' 00:11:06.734 + echo '' 00:11:06.734 + rm /tmp/62.pnN /tmp/spdk_tgt_config.json.NMA 00:11:06.734 + exit 1 00:11:06.734 INFO: configuration change detected. 00:11:06.734 14:37:15 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
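The change-detection pass above is the same comparison run once more after deliberately removing the marker bdev, so a non-empty diff is the expected outcome. Continuing the previous sketch with the RPC names visible in the log:

  # Remove the bdev that exists only so a configuration change can be provoked,
  # then re-save and re-compare; a non-empty diff is the success condition here.
  "$ROOT/scripts/rpc.py" -s "$RPC_SOCK" bdev_malloc_delete MallocBdevForConfigChangeCheck
  "$ROOT/scripts/rpc.py" -s "$RPC_SOCK" save_config > "$live_cfg"
  "$ROOT/test/json_config/config_filter.py" -method sort < "$live_cfg" > "$live_cfg.sorted"
  if ! diff -u "$live_cfg.sorted" /tmp/file_cfg.sorted > /dev/null; then
      echo 'INFO: configuration change detected.'
  fi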
00:11:06.734 14:37:15 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:11:06.734 14:37:15 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:11:06.734 14:37:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:06.734 14:37:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:06.734 14:37:15 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:11:06.734 14:37:15 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:11:06.734 14:37:15 json_config -- json_config/json_config.sh@324 -- # [[ -n 56877 ]] 00:11:06.734 14:37:15 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:11:06.734 14:37:15 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:11:06.734 14:37:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:06.734 14:37:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:06.734 14:37:15 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:11:06.734 14:37:15 json_config -- json_config/json_config.sh@200 -- # uname -s 00:11:06.734 14:37:15 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:11:06.734 14:37:15 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:11:06.734 14:37:15 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:11:06.734 14:37:15 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:11:06.734 14:37:15 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:06.734 14:37:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:06.734 14:37:15 json_config -- json_config/json_config.sh@330 -- # killprocess 56877 00:11:06.734 14:37:15 json_config -- common/autotest_common.sh@952 -- # '[' -z 56877 ']' 00:11:06.734 14:37:15 json_config -- common/autotest_common.sh@956 -- # kill -0 56877 00:11:06.734 14:37:15 json_config -- common/autotest_common.sh@957 -- # uname 00:11:06.734 14:37:15 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:06.734 14:37:15 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56877 00:11:06.734 killing process with pid 56877 00:11:06.734 14:37:15 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:06.734 14:37:15 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:06.734 14:37:15 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56877' 00:11:06.734 14:37:15 json_config -- common/autotest_common.sh@971 -- # kill 56877 00:11:06.734 14:37:15 json_config -- common/autotest_common.sh@976 -- # wait 56877 00:11:06.994 14:37:15 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:06.994 14:37:15 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:11:06.994 14:37:15 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:06.994 14:37:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:06.994 14:37:15 json_config -- json_config/json_config.sh@335 -- # return 0 00:11:06.994 14:37:15 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:11:06.994 INFO: Success 00:11:06.994 00:11:06.994 real 0m7.396s 00:11:06.994 user 0m10.397s 00:11:06.994 sys 0m1.235s 00:11:06.994 
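The teardown above goes through the killprocess helper from autotest_common.sh; condensed here to the steps visible in the trace (the real helper carries additional branches, e.g. separate handling when the process belongs to sudo):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                       # fail fast if the target already exited
        ps --no-headers -o comm= "$pid"      # reports reactor_0 for an SPDK app
        echo "killing process with pid $pid"
        kill "$pid"                          # SIGTERM lets spdk_tgt shut down cleanly
        wait "$pid"                          # reap it before the next test starts
    }

    killprocess 56877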
14:37:15 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:06.994 ************************************ 00:11:06.994 END TEST json_config 00:11:06.994 ************************************ 00:11:06.994 14:37:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:06.994 14:37:15 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:11:06.994 14:37:15 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:06.994 14:37:15 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:06.994 14:37:15 -- common/autotest_common.sh@10 -- # set +x 00:11:06.994 ************************************ 00:11:06.994 START TEST json_config_extra_key 00:11:06.994 ************************************ 00:11:06.994 14:37:15 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:11:06.994 14:37:16 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:06.994 14:37:16 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:11:06.994 14:37:16 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:06.994 14:37:16 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:06.994 14:37:16 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:06.994 14:37:16 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:06.994 14:37:16 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:06.994 14:37:16 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:11:06.994 14:37:16 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:11:06.994 14:37:16 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:11:06.994 14:37:16 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:11:06.995 14:37:16 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:11:06.995 14:37:16 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:11:06.995 14:37:16 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:11:06.995 14:37:16 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:06.995 14:37:16 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:11:06.995 14:37:16 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:11:06.995 14:37:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:06.995 14:37:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:06.995 14:37:16 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:11:06.995 14:37:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:11:06.995 14:37:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:06.995 14:37:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:11:06.995 14:37:16 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:11:06.995 14:37:16 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:11:06.995 14:37:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:11:06.995 14:37:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:06.995 14:37:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:11:06.995 14:37:16 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:11:06.995 14:37:16 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:06.995 14:37:16 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:06.995 14:37:16 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:11:06.995 14:37:16 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:06.995 14:37:16 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:06.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.995 --rc genhtml_branch_coverage=1 00:11:06.995 --rc genhtml_function_coverage=1 00:11:06.995 --rc genhtml_legend=1 00:11:06.995 --rc geninfo_all_blocks=1 00:11:06.995 --rc geninfo_unexecuted_blocks=1 00:11:06.995 00:11:06.995 ' 00:11:06.995 14:37:16 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:06.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.995 --rc genhtml_branch_coverage=1 00:11:06.995 --rc genhtml_function_coverage=1 00:11:06.995 --rc genhtml_legend=1 00:11:06.995 --rc geninfo_all_blocks=1 00:11:06.995 --rc geninfo_unexecuted_blocks=1 00:11:06.995 00:11:06.995 ' 00:11:06.995 14:37:16 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:06.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.995 --rc genhtml_branch_coverage=1 00:11:06.995 --rc genhtml_function_coverage=1 00:11:06.995 --rc genhtml_legend=1 00:11:06.995 --rc geninfo_all_blocks=1 00:11:06.995 --rc geninfo_unexecuted_blocks=1 00:11:06.995 00:11:06.995 ' 00:11:06.995 14:37:16 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:06.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.995 --rc genhtml_branch_coverage=1 00:11:06.995 --rc genhtml_function_coverage=1 00:11:06.995 --rc genhtml_legend=1 00:11:06.995 --rc geninfo_all_blocks=1 00:11:06.995 --rc geninfo_unexecuted_blocks=1 00:11:06.995 00:11:06.995 ' 00:11:06.995 14:37:16 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:06.995 14:37:16 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:11:06.995 14:37:16 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:06.995 14:37:16 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.995 14:37:16 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:06.995 14:37:16 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.995 14:37:16 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.995 14:37:16 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.995 14:37:16 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.995 14:37:16 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.995 14:37:16 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.995 14:37:16 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.995 14:37:16 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:11:06.995 14:37:16 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:11:06.995 14:37:16 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.995 14:37:16 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.995 14:37:16 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:06.995 14:37:16 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:06.995 14:37:16 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:06.995 14:37:16 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:11:06.995 14:37:16 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.995 14:37:16 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.995 14:37:16 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.995 14:37:16 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.995 14:37:16 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.995 14:37:16 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.995 14:37:16 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:11:06.995 14:37:16 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.995 14:37:16 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:11:06.995 14:37:16 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:06.995 14:37:16 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:06.995 14:37:16 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:06.995 14:37:16 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:06.995 14:37:16 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:06.995 14:37:16 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:06.995 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:06.995 14:37:16 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:06.995 14:37:16 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:06.995 14:37:16 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:06.995 14:37:16 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:11:06.995 14:37:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:11:06.995 14:37:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:11:06.995 14:37:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:11:06.995 14:37:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:11:06.995 14:37:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:11:06.995 14:37:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:11:06.995 14:37:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:11:06.995 14:37:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:11:06.995 14:37:16 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:11:06.995 14:37:16 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:11:06.995 INFO: launching applications... 
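The launch that follows is a single spdk_tgt invocation driven by an extra JSON config plus a wait for its RPC socket; sketched below with the command line copied from the trace and a simple poll loop standing in for the waitforlisten helper:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    app_pid=$!

    # waitforlisten, simplified: keep polling until the UNIX-domain RPC socket answers
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done
    echo "target up, pid $app_pid"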
00:11:06.995 14:37:16 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:11:06.996 14:37:16 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:11:06.996 14:37:16 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:11:06.996 14:37:16 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:06.996 14:37:16 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:06.996 14:37:16 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:11:06.996 14:37:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:06.996 14:37:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:06.996 14:37:16 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57024 00:11:06.996 14:37:16 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:11:06.996 Waiting for target to run... 00:11:06.996 14:37:16 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:11:06.996 14:37:16 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57024 /var/tmp/spdk_tgt.sock 00:11:06.996 14:37:16 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57024 ']' 00:11:06.996 14:37:16 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:06.996 14:37:16 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:06.996 14:37:16 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:06.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:06.996 14:37:16 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:06.996 14:37:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:11:07.254 [2024-11-04 14:37:16.148007] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:11:07.254 [2024-11-04 14:37:16.148588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57024 ] 00:11:07.513 [2024-11-04 14:37:16.505718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.513 [2024-11-04 14:37:16.535162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.513 [2024-11-04 14:37:16.566873] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:08.080 14:37:17 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:08.080 14:37:17 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:11:08.080 14:37:17 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:11:08.080 00:11:08.080 INFO: shutting down applications... 00:11:08.080 14:37:17 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:11:08.080 14:37:17 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:11:08.081 14:37:17 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:11:08.081 14:37:17 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:11:08.081 14:37:17 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57024 ]] 00:11:08.081 14:37:17 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57024 00:11:08.081 14:37:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:11:08.081 14:37:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:08.081 14:37:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57024 00:11:08.081 14:37:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:08.688 14:37:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:08.688 14:37:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:08.688 14:37:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57024 00:11:08.688 14:37:17 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:11:08.688 14:37:17 json_config_extra_key -- json_config/common.sh@43 -- # break 00:11:08.688 14:37:17 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:11:08.688 SPDK target shutdown done 00:11:08.688 Success 00:11:08.688 14:37:17 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:11:08.688 14:37:17 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:11:08.688 00:11:08.688 real 0m1.591s 00:11:08.688 user 0m1.281s 00:11:08.688 sys 0m0.338s 00:11:08.688 14:37:17 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:08.688 14:37:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:11:08.688 ************************************ 00:11:08.688 END TEST json_config_extra_key 00:11:08.688 ************************************ 00:11:08.688 14:37:17 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:08.688 14:37:17 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:08.688 14:37:17 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:08.688 14:37:17 -- common/autotest_common.sh@10 -- # set +x 00:11:08.688 ************************************ 00:11:08.688 START TEST alias_rpc 00:11:08.688 ************************************ 00:11:08.689 14:37:17 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:08.689 * Looking for test storage... 
00:11:08.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:11:08.689 14:37:17 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:08.689 14:37:17 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:08.689 14:37:17 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:11:08.689 14:37:17 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:08.689 14:37:17 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:08.689 14:37:17 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:08.689 14:37:17 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:08.689 14:37:17 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.689 14:37:17 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:08.689 14:37:17 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:08.689 14:37:17 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:08.689 14:37:17 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:08.689 14:37:17 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:08.689 14:37:17 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:08.689 14:37:17 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:08.689 14:37:17 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:08.689 14:37:17 alias_rpc -- scripts/common.sh@345 -- # : 1 00:11:08.689 14:37:17 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:08.689 14:37:17 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:08.689 14:37:17 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:08.689 14:37:17 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:11:08.689 14:37:17 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.689 14:37:17 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:11:08.689 14:37:17 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:08.689 14:37:17 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:08.689 14:37:17 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:11:08.689 14:37:17 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.689 14:37:17 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:11:08.689 14:37:17 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:08.689 14:37:17 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:08.689 14:37:17 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:08.689 14:37:17 alias_rpc -- scripts/common.sh@368 -- # return 0 00:11:08.689 14:37:17 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.689 14:37:17 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:08.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.689 --rc genhtml_branch_coverage=1 00:11:08.689 --rc genhtml_function_coverage=1 00:11:08.689 --rc genhtml_legend=1 00:11:08.689 --rc geninfo_all_blocks=1 00:11:08.689 --rc geninfo_unexecuted_blocks=1 00:11:08.689 00:11:08.689 ' 00:11:08.689 14:37:17 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:08.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.689 --rc genhtml_branch_coverage=1 00:11:08.689 --rc genhtml_function_coverage=1 00:11:08.689 --rc genhtml_legend=1 00:11:08.689 --rc geninfo_all_blocks=1 00:11:08.689 --rc geninfo_unexecuted_blocks=1 00:11:08.689 00:11:08.689 ' 00:11:08.689 14:37:17 alias_rpc -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:08.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.689 --rc genhtml_branch_coverage=1 00:11:08.689 --rc genhtml_function_coverage=1 00:11:08.689 --rc genhtml_legend=1 00:11:08.689 --rc geninfo_all_blocks=1 00:11:08.689 --rc geninfo_unexecuted_blocks=1 00:11:08.689 00:11:08.689 ' 00:11:08.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.689 14:37:17 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:08.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.689 --rc genhtml_branch_coverage=1 00:11:08.689 --rc genhtml_function_coverage=1 00:11:08.689 --rc genhtml_legend=1 00:11:08.689 --rc geninfo_all_blocks=1 00:11:08.689 --rc geninfo_unexecuted_blocks=1 00:11:08.689 00:11:08.689 ' 00:11:08.689 14:37:17 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:08.689 14:37:17 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57098 00:11:08.689 14:37:17 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57098 00:11:08.689 14:37:17 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57098 ']' 00:11:08.689 14:37:17 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.689 14:37:17 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:08.689 14:37:17 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:08.689 14:37:17 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.689 14:37:17 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:08.689 14:37:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.689 [2024-11-04 14:37:17.778909] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:11:08.689 [2024-11-04 14:37:17.779486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57098 ] 00:11:08.948 [2024-11-04 14:37:17.926413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.948 [2024-11-04 14:37:17.964506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.948 [2024-11-04 14:37:18.013297] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:09.207 14:37:18 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:09.207 14:37:18 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:09.208 14:37:18 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:11:09.466 14:37:18 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57098 00:11:09.466 14:37:18 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57098 ']' 00:11:09.466 14:37:18 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57098 00:11:09.466 14:37:18 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:11:09.466 14:37:18 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:09.466 14:37:18 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57098 00:11:09.466 killing process with pid 57098 00:11:09.466 14:37:18 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:09.466 14:37:18 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:09.466 14:37:18 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57098' 00:11:09.466 14:37:18 alias_rpc -- common/autotest_common.sh@971 -- # kill 57098 00:11:09.466 14:37:18 alias_rpc -- common/autotest_common.sh@976 -- # wait 57098 00:11:09.466 ************************************ 00:11:09.466 END TEST alias_rpc 00:11:09.466 ************************************ 00:11:09.466 00:11:09.466 real 0m1.004s 00:11:09.466 user 0m1.054s 00:11:09.466 sys 0m0.303s 00:11:09.466 14:37:18 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:09.466 14:37:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.724 14:37:18 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:11:09.724 14:37:18 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:11:09.724 14:37:18 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:09.724 14:37:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:09.724 14:37:18 -- common/autotest_common.sh@10 -- # set +x 00:11:09.724 ************************************ 00:11:09.724 START TEST spdkcli_tcp 00:11:09.724 ************************************ 00:11:09.724 14:37:18 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:11:09.724 * Looking for test storage... 
00:11:09.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:11:09.724 14:37:18 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:09.724 14:37:18 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:09.724 14:37:18 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:11:09.724 14:37:18 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:09.724 14:37:18 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.724 14:37:18 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.724 14:37:18 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.724 14:37:18 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.724 14:37:18 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.724 14:37:18 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.724 14:37:18 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.724 14:37:18 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.724 14:37:18 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.724 14:37:18 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.724 14:37:18 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:09.724 14:37:18 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:11:09.724 14:37:18 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:11:09.724 14:37:18 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.724 14:37:18 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:09.724 14:37:18 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:11:09.724 14:37:18 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:11:09.724 14:37:18 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.724 14:37:18 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:11:09.724 14:37:18 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.724 14:37:18 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:11:09.724 14:37:18 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:11:09.724 14:37:18 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.724 14:37:18 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:11:09.724 14:37:18 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:11:09.724 14:37:18 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.724 14:37:18 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.724 14:37:18 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:11:09.724 14:37:18 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.724 14:37:18 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:09.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.724 --rc genhtml_branch_coverage=1 00:11:09.724 --rc genhtml_function_coverage=1 00:11:09.724 --rc genhtml_legend=1 00:11:09.724 --rc geninfo_all_blocks=1 00:11:09.724 --rc geninfo_unexecuted_blocks=1 00:11:09.724 00:11:09.724 ' 00:11:09.724 14:37:18 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:09.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.724 --rc genhtml_branch_coverage=1 00:11:09.724 --rc genhtml_function_coverage=1 00:11:09.724 --rc genhtml_legend=1 00:11:09.724 --rc geninfo_all_blocks=1 00:11:09.724 --rc geninfo_unexecuted_blocks=1 00:11:09.724 
00:11:09.724 ' 00:11:09.724 14:37:18 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:09.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.724 --rc genhtml_branch_coverage=1 00:11:09.724 --rc genhtml_function_coverage=1 00:11:09.724 --rc genhtml_legend=1 00:11:09.724 --rc geninfo_all_blocks=1 00:11:09.724 --rc geninfo_unexecuted_blocks=1 00:11:09.724 00:11:09.724 ' 00:11:09.724 14:37:18 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:09.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.724 --rc genhtml_branch_coverage=1 00:11:09.724 --rc genhtml_function_coverage=1 00:11:09.724 --rc genhtml_legend=1 00:11:09.724 --rc geninfo_all_blocks=1 00:11:09.724 --rc geninfo_unexecuted_blocks=1 00:11:09.724 00:11:09.724 ' 00:11:09.724 14:37:18 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:11:09.724 14:37:18 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:11:09.724 14:37:18 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:11:09.724 14:37:18 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:11:09.724 14:37:18 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:11:09.724 14:37:18 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:09.724 14:37:18 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:11:09.724 14:37:18 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:09.724 14:37:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:09.724 14:37:18 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57169 00:11:09.724 14:37:18 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:11:09.724 14:37:18 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57169 00:11:09.724 14:37:18 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 57169 ']' 00:11:09.725 14:37:18 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.725 14:37:18 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:09.725 14:37:18 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.725 14:37:18 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:09.725 14:37:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:09.725 [2024-11-04 14:37:18.832339] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
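Before querying the target, the tcp.sh run below bridges its UNIX-domain RPC socket onto TCP so rpc.py can connect by IP address and port; reduced to the two commands visible in the trace (the trailing cleanup kill is assumed):

    # forward 127.0.0.1:9998 to the target's RPC socket
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # -r/-t: connection retries and timeout, as passed by the test
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"   # assumed teardown; not shown at this point in the log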
00:11:09.725 [2024-11-04 14:37:18.832944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57169 ] 00:11:09.982 [2024-11-04 14:37:18.971068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:09.982 [2024-11-04 14:37:19.010768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.982 [2024-11-04 14:37:19.010781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.982 [2024-11-04 14:37:19.061299] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:10.917 14:37:19 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:10.917 14:37:19 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:11:10.917 14:37:19 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57186 00:11:10.917 14:37:19 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:11:10.917 14:37:19 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:11:10.917 [ 00:11:10.917 "bdev_malloc_delete", 00:11:10.917 "bdev_malloc_create", 00:11:10.917 "bdev_null_resize", 00:11:10.917 "bdev_null_delete", 00:11:10.917 "bdev_null_create", 00:11:10.917 "bdev_nvme_cuse_unregister", 00:11:10.917 "bdev_nvme_cuse_register", 00:11:10.917 "bdev_opal_new_user", 00:11:10.917 "bdev_opal_set_lock_state", 00:11:10.917 "bdev_opal_delete", 00:11:10.917 "bdev_opal_get_info", 00:11:10.917 "bdev_opal_create", 00:11:10.917 "bdev_nvme_opal_revert", 00:11:10.917 "bdev_nvme_opal_init", 00:11:10.917 "bdev_nvme_send_cmd", 00:11:10.917 "bdev_nvme_set_keys", 00:11:10.917 "bdev_nvme_get_path_iostat", 00:11:10.917 "bdev_nvme_get_mdns_discovery_info", 00:11:10.917 "bdev_nvme_stop_mdns_discovery", 00:11:10.917 "bdev_nvme_start_mdns_discovery", 00:11:10.917 "bdev_nvme_set_multipath_policy", 00:11:10.917 "bdev_nvme_set_preferred_path", 00:11:10.917 "bdev_nvme_get_io_paths", 00:11:10.917 "bdev_nvme_remove_error_injection", 00:11:10.917 "bdev_nvme_add_error_injection", 00:11:10.917 "bdev_nvme_get_discovery_info", 00:11:10.917 "bdev_nvme_stop_discovery", 00:11:10.917 "bdev_nvme_start_discovery", 00:11:10.917 "bdev_nvme_get_controller_health_info", 00:11:10.917 "bdev_nvme_disable_controller", 00:11:10.917 "bdev_nvme_enable_controller", 00:11:10.917 "bdev_nvme_reset_controller", 00:11:10.917 "bdev_nvme_get_transport_statistics", 00:11:10.917 "bdev_nvme_apply_firmware", 00:11:10.917 "bdev_nvme_detach_controller", 00:11:10.917 "bdev_nvme_get_controllers", 00:11:10.917 "bdev_nvme_attach_controller", 00:11:10.917 "bdev_nvme_set_hotplug", 00:11:10.917 "bdev_nvme_set_options", 00:11:10.917 "bdev_passthru_delete", 00:11:10.917 "bdev_passthru_create", 00:11:10.917 "bdev_lvol_set_parent_bdev", 00:11:10.917 "bdev_lvol_set_parent", 00:11:10.917 "bdev_lvol_check_shallow_copy", 00:11:10.917 "bdev_lvol_start_shallow_copy", 00:11:10.917 "bdev_lvol_grow_lvstore", 00:11:10.917 "bdev_lvol_get_lvols", 00:11:10.917 "bdev_lvol_get_lvstores", 00:11:10.917 "bdev_lvol_delete", 00:11:10.917 "bdev_lvol_set_read_only", 00:11:10.917 "bdev_lvol_resize", 00:11:10.917 "bdev_lvol_decouple_parent", 00:11:10.917 "bdev_lvol_inflate", 00:11:10.917 "bdev_lvol_rename", 00:11:10.917 "bdev_lvol_clone_bdev", 00:11:10.917 "bdev_lvol_clone", 00:11:10.917 "bdev_lvol_snapshot", 
00:11:10.917 "bdev_lvol_create", 00:11:10.917 "bdev_lvol_delete_lvstore", 00:11:10.917 "bdev_lvol_rename_lvstore", 00:11:10.917 "bdev_lvol_create_lvstore", 00:11:10.917 "bdev_raid_set_options", 00:11:10.917 "bdev_raid_remove_base_bdev", 00:11:10.917 "bdev_raid_add_base_bdev", 00:11:10.917 "bdev_raid_delete", 00:11:10.917 "bdev_raid_create", 00:11:10.917 "bdev_raid_get_bdevs", 00:11:10.917 "bdev_error_inject_error", 00:11:10.917 "bdev_error_delete", 00:11:10.917 "bdev_error_create", 00:11:10.917 "bdev_split_delete", 00:11:10.917 "bdev_split_create", 00:11:10.917 "bdev_delay_delete", 00:11:10.917 "bdev_delay_create", 00:11:10.917 "bdev_delay_update_latency", 00:11:10.917 "bdev_zone_block_delete", 00:11:10.917 "bdev_zone_block_create", 00:11:10.917 "blobfs_create", 00:11:10.917 "blobfs_detect", 00:11:10.917 "blobfs_set_cache_size", 00:11:10.917 "bdev_aio_delete", 00:11:10.917 "bdev_aio_rescan", 00:11:10.917 "bdev_aio_create", 00:11:10.917 "bdev_ftl_set_property", 00:11:10.917 "bdev_ftl_get_properties", 00:11:10.917 "bdev_ftl_get_stats", 00:11:10.917 "bdev_ftl_unmap", 00:11:10.917 "bdev_ftl_unload", 00:11:10.917 "bdev_ftl_delete", 00:11:10.917 "bdev_ftl_load", 00:11:10.917 "bdev_ftl_create", 00:11:10.917 "bdev_virtio_attach_controller", 00:11:10.917 "bdev_virtio_scsi_get_devices", 00:11:10.917 "bdev_virtio_detach_controller", 00:11:10.917 "bdev_virtio_blk_set_hotplug", 00:11:10.917 "bdev_iscsi_delete", 00:11:10.917 "bdev_iscsi_create", 00:11:10.917 "bdev_iscsi_set_options", 00:11:10.917 "bdev_uring_delete", 00:11:10.917 "bdev_uring_rescan", 00:11:10.917 "bdev_uring_create", 00:11:10.917 "accel_error_inject_error", 00:11:10.917 "ioat_scan_accel_module", 00:11:10.917 "dsa_scan_accel_module", 00:11:10.917 "iaa_scan_accel_module", 00:11:10.917 "keyring_file_remove_key", 00:11:10.917 "keyring_file_add_key", 00:11:10.917 "keyring_linux_set_options", 00:11:10.917 "fsdev_aio_delete", 00:11:10.917 "fsdev_aio_create", 00:11:10.917 "iscsi_get_histogram", 00:11:10.917 "iscsi_enable_histogram", 00:11:10.917 "iscsi_set_options", 00:11:10.917 "iscsi_get_auth_groups", 00:11:10.917 "iscsi_auth_group_remove_secret", 00:11:10.917 "iscsi_auth_group_add_secret", 00:11:10.917 "iscsi_delete_auth_group", 00:11:10.917 "iscsi_create_auth_group", 00:11:10.917 "iscsi_set_discovery_auth", 00:11:10.917 "iscsi_get_options", 00:11:10.917 "iscsi_target_node_request_logout", 00:11:10.917 "iscsi_target_node_set_redirect", 00:11:10.917 "iscsi_target_node_set_auth", 00:11:10.917 "iscsi_target_node_add_lun", 00:11:10.917 "iscsi_get_stats", 00:11:10.917 "iscsi_get_connections", 00:11:10.917 "iscsi_portal_group_set_auth", 00:11:10.917 "iscsi_start_portal_group", 00:11:10.918 "iscsi_delete_portal_group", 00:11:10.918 "iscsi_create_portal_group", 00:11:10.918 "iscsi_get_portal_groups", 00:11:10.918 "iscsi_delete_target_node", 00:11:10.918 "iscsi_target_node_remove_pg_ig_maps", 00:11:10.918 "iscsi_target_node_add_pg_ig_maps", 00:11:10.918 "iscsi_create_target_node", 00:11:10.918 "iscsi_get_target_nodes", 00:11:10.918 "iscsi_delete_initiator_group", 00:11:10.918 "iscsi_initiator_group_remove_initiators", 00:11:10.918 "iscsi_initiator_group_add_initiators", 00:11:10.918 "iscsi_create_initiator_group", 00:11:10.918 "iscsi_get_initiator_groups", 00:11:10.918 "nvmf_set_crdt", 00:11:10.918 "nvmf_set_config", 00:11:10.918 "nvmf_set_max_subsystems", 00:11:10.918 "nvmf_stop_mdns_prr", 00:11:10.918 "nvmf_publish_mdns_prr", 00:11:10.918 "nvmf_subsystem_get_listeners", 00:11:10.918 "nvmf_subsystem_get_qpairs", 00:11:10.918 
"nvmf_subsystem_get_controllers", 00:11:10.918 "nvmf_get_stats", 00:11:10.918 "nvmf_get_transports", 00:11:10.918 "nvmf_create_transport", 00:11:10.918 "nvmf_get_targets", 00:11:10.918 "nvmf_delete_target", 00:11:10.918 "nvmf_create_target", 00:11:10.918 "nvmf_subsystem_allow_any_host", 00:11:10.918 "nvmf_subsystem_set_keys", 00:11:10.918 "nvmf_subsystem_remove_host", 00:11:10.918 "nvmf_subsystem_add_host", 00:11:10.918 "nvmf_ns_remove_host", 00:11:10.918 "nvmf_ns_add_host", 00:11:10.918 "nvmf_subsystem_remove_ns", 00:11:10.918 "nvmf_subsystem_set_ns_ana_group", 00:11:10.918 "nvmf_subsystem_add_ns", 00:11:10.918 "nvmf_subsystem_listener_set_ana_state", 00:11:10.918 "nvmf_discovery_get_referrals", 00:11:10.918 "nvmf_discovery_remove_referral", 00:11:10.918 "nvmf_discovery_add_referral", 00:11:10.918 "nvmf_subsystem_remove_listener", 00:11:10.918 "nvmf_subsystem_add_listener", 00:11:10.918 "nvmf_delete_subsystem", 00:11:10.918 "nvmf_create_subsystem", 00:11:10.918 "nvmf_get_subsystems", 00:11:10.918 "env_dpdk_get_mem_stats", 00:11:10.918 "nbd_get_disks", 00:11:10.918 "nbd_stop_disk", 00:11:10.918 "nbd_start_disk", 00:11:10.918 "ublk_recover_disk", 00:11:10.918 "ublk_get_disks", 00:11:10.918 "ublk_stop_disk", 00:11:10.918 "ublk_start_disk", 00:11:10.918 "ublk_destroy_target", 00:11:10.918 "ublk_create_target", 00:11:10.918 "virtio_blk_create_transport", 00:11:10.918 "virtio_blk_get_transports", 00:11:10.918 "vhost_controller_set_coalescing", 00:11:10.918 "vhost_get_controllers", 00:11:10.918 "vhost_delete_controller", 00:11:10.918 "vhost_create_blk_controller", 00:11:10.918 "vhost_scsi_controller_remove_target", 00:11:10.918 "vhost_scsi_controller_add_target", 00:11:10.918 "vhost_start_scsi_controller", 00:11:10.918 "vhost_create_scsi_controller", 00:11:10.918 "thread_set_cpumask", 00:11:10.918 "scheduler_set_options", 00:11:10.918 "framework_get_governor", 00:11:10.918 "framework_get_scheduler", 00:11:10.918 "framework_set_scheduler", 00:11:10.918 "framework_get_reactors", 00:11:10.918 "thread_get_io_channels", 00:11:10.918 "thread_get_pollers", 00:11:10.918 "thread_get_stats", 00:11:10.918 "framework_monitor_context_switch", 00:11:10.918 "spdk_kill_instance", 00:11:10.918 "log_enable_timestamps", 00:11:10.918 "log_get_flags", 00:11:10.918 "log_clear_flag", 00:11:10.918 "log_set_flag", 00:11:10.918 "log_get_level", 00:11:10.918 "log_set_level", 00:11:10.918 "log_get_print_level", 00:11:10.918 "log_set_print_level", 00:11:10.918 "framework_enable_cpumask_locks", 00:11:10.918 "framework_disable_cpumask_locks", 00:11:10.918 "framework_wait_init", 00:11:10.918 "framework_start_init", 00:11:10.918 "scsi_get_devices", 00:11:10.918 "bdev_get_histogram", 00:11:10.918 "bdev_enable_histogram", 00:11:10.918 "bdev_set_qos_limit", 00:11:10.918 "bdev_set_qd_sampling_period", 00:11:10.918 "bdev_get_bdevs", 00:11:10.918 "bdev_reset_iostat", 00:11:10.918 "bdev_get_iostat", 00:11:10.918 "bdev_examine", 00:11:10.918 "bdev_wait_for_examine", 00:11:10.918 "bdev_set_options", 00:11:10.918 "accel_get_stats", 00:11:10.918 "accel_set_options", 00:11:10.918 "accel_set_driver", 00:11:10.918 "accel_crypto_key_destroy", 00:11:10.918 "accel_crypto_keys_get", 00:11:10.918 "accel_crypto_key_create", 00:11:10.918 "accel_assign_opc", 00:11:10.918 "accel_get_module_info", 00:11:10.918 "accel_get_opc_assignments", 00:11:10.918 "vmd_rescan", 00:11:10.918 "vmd_remove_device", 00:11:10.918 "vmd_enable", 00:11:10.918 "sock_get_default_impl", 00:11:10.918 "sock_set_default_impl", 00:11:10.918 "sock_impl_set_options", 00:11:10.918 
"sock_impl_get_options", 00:11:10.918 "iobuf_get_stats", 00:11:10.918 "iobuf_set_options", 00:11:10.918 "keyring_get_keys", 00:11:10.918 "framework_get_pci_devices", 00:11:10.918 "framework_get_config", 00:11:10.918 "framework_get_subsystems", 00:11:10.918 "fsdev_set_opts", 00:11:10.918 "fsdev_get_opts", 00:11:10.918 "trace_get_info", 00:11:10.918 "trace_get_tpoint_group_mask", 00:11:10.918 "trace_disable_tpoint_group", 00:11:10.918 "trace_enable_tpoint_group", 00:11:10.918 "trace_clear_tpoint_mask", 00:11:10.918 "trace_set_tpoint_mask", 00:11:10.918 "notify_get_notifications", 00:11:10.918 "notify_get_types", 00:11:10.918 "spdk_get_version", 00:11:10.918 "rpc_get_methods" 00:11:10.918 ] 00:11:10.918 14:37:19 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:11:10.918 14:37:19 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:10.918 14:37:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:10.918 14:37:19 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:10.918 14:37:19 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57169 00:11:10.918 14:37:19 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 57169 ']' 00:11:10.918 14:37:19 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 57169 00:11:10.918 14:37:19 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:11:10.918 14:37:19 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:10.918 14:37:19 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57169 00:11:10.918 killing process with pid 57169 00:11:10.918 14:37:20 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:10.918 14:37:20 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:10.918 14:37:20 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57169' 00:11:10.918 14:37:20 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 57169 00:11:10.918 14:37:20 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 57169 00:11:11.177 ************************************ 00:11:11.177 END TEST spdkcli_tcp 00:11:11.177 ************************************ 00:11:11.177 00:11:11.177 real 0m1.578s 00:11:11.177 user 0m2.971s 00:11:11.177 sys 0m0.347s 00:11:11.177 14:37:20 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:11.177 14:37:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:11.177 14:37:20 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:11.177 14:37:20 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:11.177 14:37:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:11.177 14:37:20 -- common/autotest_common.sh@10 -- # set +x 00:11:11.177 ************************************ 00:11:11.177 START TEST dpdk_mem_utility 00:11:11.177 ************************************ 00:11:11.177 14:37:20 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:11.177 * Looking for test storage... 
00:11:11.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:11:11.437 14:37:20 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:11.437 14:37:20 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:11:11.437 14:37:20 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:11.437 14:37:20 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:11.437 14:37:20 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:11.437 14:37:20 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:11.437 14:37:20 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:11.437 14:37:20 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:11:11.437 14:37:20 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:11:11.437 14:37:20 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:11:11.437 14:37:20 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:11:11.437 14:37:20 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:11:11.437 14:37:20 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:11:11.437 14:37:20 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:11:11.437 14:37:20 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:11.437 14:37:20 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:11:11.437 14:37:20 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:11:11.437 14:37:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:11.437 14:37:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:11.437 14:37:20 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:11:11.437 14:37:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:11:11.437 14:37:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:11.437 14:37:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:11:11.437 14:37:20 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:11:11.437 14:37:20 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:11:11.437 14:37:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:11:11.437 14:37:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:11.437 14:37:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:11:11.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:11.437 14:37:20 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:11:11.437 14:37:20 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:11.437 14:37:20 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:11.437 14:37:20 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:11:11.437 14:37:20 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:11.437 14:37:20 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:11.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.437 --rc genhtml_branch_coverage=1 00:11:11.437 --rc genhtml_function_coverage=1 00:11:11.437 --rc genhtml_legend=1 00:11:11.437 --rc geninfo_all_blocks=1 00:11:11.437 --rc geninfo_unexecuted_blocks=1 00:11:11.437 00:11:11.437 ' 00:11:11.437 14:37:20 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:11.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.437 --rc genhtml_branch_coverage=1 00:11:11.437 --rc genhtml_function_coverage=1 00:11:11.437 --rc genhtml_legend=1 00:11:11.437 --rc geninfo_all_blocks=1 00:11:11.437 --rc geninfo_unexecuted_blocks=1 00:11:11.437 00:11:11.437 ' 00:11:11.437 14:37:20 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:11.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.437 --rc genhtml_branch_coverage=1 00:11:11.437 --rc genhtml_function_coverage=1 00:11:11.437 --rc genhtml_legend=1 00:11:11.437 --rc geninfo_all_blocks=1 00:11:11.438 --rc geninfo_unexecuted_blocks=1 00:11:11.438 00:11:11.438 ' 00:11:11.438 14:37:20 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:11.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.438 --rc genhtml_branch_coverage=1 00:11:11.438 --rc genhtml_function_coverage=1 00:11:11.438 --rc genhtml_legend=1 00:11:11.438 --rc geninfo_all_blocks=1 00:11:11.438 --rc geninfo_unexecuted_blocks=1 00:11:11.438 00:11:11.438 ' 00:11:11.438 14:37:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:11.438 14:37:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57262 00:11:11.438 14:37:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57262 00:11:11.438 14:37:20 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 57262 ']' 00:11:11.438 14:37:20 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.438 14:37:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:11.438 14:37:20 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:11.438 14:37:20 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.438 14:37:20 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:11.438 14:37:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:11.438 [2024-11-04 14:37:20.447846] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:11:11.438 [2024-11-04 14:37:20.448075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57262 ] 00:11:11.697 [2024-11-04 14:37:20.585716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.697 [2024-11-04 14:37:20.622078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.697 [2024-11-04 14:37:20.666103] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:12.263 14:37:21 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:12.263 14:37:21 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:11:12.263 14:37:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:11:12.263 14:37:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:11:12.263 14:37:21 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.263 14:37:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:12.263 { 00:11:12.263 "filename": "/tmp/spdk_mem_dump.txt" 00:11:12.263 } 00:11:12.263 14:37:21 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.263 14:37:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:12.263 DPDK memory size 810.000000 MiB in 1 heap(s) 00:11:12.263 1 heaps totaling size 810.000000 MiB 00:11:12.263 size: 810.000000 MiB heap id: 0 00:11:12.263 end heaps---------- 00:11:12.263 9 mempools totaling size 595.772034 MiB 00:11:12.263 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:11:12.263 size: 158.602051 MiB name: PDU_data_out_Pool 00:11:12.263 size: 92.545471 MiB name: bdev_io_57262 00:11:12.263 size: 50.003479 MiB name: msgpool_57262 00:11:12.263 size: 36.509338 MiB name: fsdev_io_57262 00:11:12.263 size: 21.763794 MiB name: PDU_Pool 00:11:12.263 size: 19.513306 MiB name: SCSI_TASK_Pool 00:11:12.263 size: 4.133484 MiB name: evtpool_57262 00:11:12.263 size: 0.026123 MiB name: Session_Pool 00:11:12.263 end mempools------- 00:11:12.263 6 memzones totaling size 4.142822 MiB 00:11:12.263 size: 1.000366 MiB name: RG_ring_0_57262 00:11:12.263 size: 1.000366 MiB name: RG_ring_1_57262 00:11:12.263 size: 1.000366 MiB name: RG_ring_4_57262 00:11:12.263 size: 1.000366 MiB name: RG_ring_5_57262 00:11:12.263 size: 0.125366 MiB name: RG_ring_2_57262 00:11:12.263 size: 0.015991 MiB name: RG_ring_3_57262 00:11:12.263 end memzones------- 00:11:12.263 14:37:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:11:12.580 heap id: 0 total size: 810.000000 MiB number of busy elements: 318 number of free elements: 15 00:11:12.580 list of free elements. 
size: 10.812317 MiB 00:11:12.580 element at address: 0x200018a00000 with size: 0.999878 MiB 00:11:12.580 element at address: 0x200018c00000 with size: 0.999878 MiB 00:11:12.580 element at address: 0x200031800000 with size: 0.994446 MiB 00:11:12.580 element at address: 0x200000400000 with size: 0.993958 MiB 00:11:12.580 element at address: 0x200006400000 with size: 0.959839 MiB 00:11:12.580 element at address: 0x200012c00000 with size: 0.954285 MiB 00:11:12.580 element at address: 0x200018e00000 with size: 0.936584 MiB 00:11:12.580 element at address: 0x200000200000 with size: 0.717346 MiB 00:11:12.580 element at address: 0x20001a600000 with size: 0.566589 MiB 00:11:12.580 element at address: 0x20000a600000 with size: 0.488892 MiB 00:11:12.580 element at address: 0x200000c00000 with size: 0.487000 MiB 00:11:12.580 element at address: 0x200019000000 with size: 0.485657 MiB 00:11:12.580 element at address: 0x200003e00000 with size: 0.480286 MiB 00:11:12.580 element at address: 0x200027a00000 with size: 0.395935 MiB 00:11:12.580 element at address: 0x200000800000 with size: 0.351746 MiB 00:11:12.580 list of standard malloc elements. size: 199.268799 MiB 00:11:12.580 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:11:12.580 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:11:12.580 element at address: 0x200018afff80 with size: 1.000122 MiB 00:11:12.580 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:11:12.580 element at address: 0x200018efff80 with size: 1.000122 MiB 00:11:12.580 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:11:12.580 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:11:12.580 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:11:12.580 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:11:12.580 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:11:12.580 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:11:12.580 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:11:12.580 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:11:12.580 element at address: 0x20000085e580 with size: 0.000183 MiB 00:11:12.580 element at address: 0x20000087e840 with size: 0.000183 MiB 00:11:12.580 element at address: 0x20000087e900 with size: 0.000183 MiB 00:11:12.580 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:11:12.580 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:11:12.580 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:11:12.580 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:11:12.580 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:11:12.580 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:11:12.580 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:11:12.580 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:11:12.580 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:11:12.580 element at address: 0x20000087f080 with size: 0.000183 MiB 00:11:12.580 element at address: 0x20000087f140 with size: 0.000183 MiB 00:11:12.580 element at address: 0x20000087f200 with size: 0.000183 MiB 00:11:12.580 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:11:12.580 element at address: 0x20000087f380 with size: 0.000183 MiB 00:11:12.580 element at address: 0x20000087f440 with size: 0.000183 MiB 00:11:12.580 element at address: 0x20000087f500 with size: 0.000183 MiB 00:11:12.580 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:11:12.580 element at address: 0x20000087f680 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:11:12.580 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:11:12.580 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:11:12.580 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:11:12.580 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:11:12.580 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:11:12.580 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:11:12.580 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:11:12.580 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:11:12.580 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:11:12.580 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:11:12.580 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:11:12.580 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:11:12.580 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:11:12.580 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:11:12.580 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:11:12.580 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:11:12.580 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:11:12.580 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:11:12.580 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:11:12.580 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:11:12.580 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:11:12.580 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:11:12.580 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:11:12.580 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:11:12.580 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:11:12.580 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:11:12.580 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200000cff000 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200003efb980 with size: 0.000183 MiB 00:11:12.581 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:11:12.581 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:11:12.581 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:11:12.581 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a6910c0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a691180 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a691240 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a691300 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a6913c0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a691480 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a691540 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a691600 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a691780 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a691840 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a691900 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a692080 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a692140 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a692200 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a692380 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a692440 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a692500 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a692680 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a692740 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a692800 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a692980 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a692b00 with size: 0.000183 MiB 
00:11:12.581 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a693040 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a693100 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a693280 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a693340 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a693400 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a693580 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a693640 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a693700 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a693880 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a693940 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a694000 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a694180 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a694240 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a694300 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a694480 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a694540 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a694600 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a694780 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a694840 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a694900 with size: 0.000183 MiB 00:11:12.581 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:11:12.582 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:11:12.582 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:11:12.582 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:11:12.582 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:11:12.582 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:11:12.582 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:11:12.582 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:11:12.582 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:11:12.582 element at 
address: 0x20001a695080 with size: 0.000183 MiB 00:11:12.582 element at address: 0x20001a695140 with size: 0.000183 MiB 00:11:12.582 element at address: 0x20001a695200 with size: 0.000183 MiB 00:11:12.582 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:11:12.582 element at address: 0x20001a695380 with size: 0.000183 MiB 00:11:12.582 element at address: 0x20001a695440 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a655c0 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a65680 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6c280 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6c480 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6c540 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6e280 
with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:11:12.582 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:11:12.582 list of memzone associated elements. 
size: 599.918884 MiB 00:11:12.582 element at address: 0x20001a695500 with size: 211.416748 MiB 00:11:12.582 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:11:12.582 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:11:12.582 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:11:12.582 element at address: 0x200012df4780 with size: 92.045044 MiB 00:11:12.582 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57262_0 00:11:12.582 element at address: 0x200000dff380 with size: 48.003052 MiB 00:11:12.582 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57262_0 00:11:12.582 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:11:12.582 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57262_0 00:11:12.582 element at address: 0x2000191be940 with size: 20.255554 MiB 00:11:12.582 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:11:12.582 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:11:12.582 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:11:12.582 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:11:12.582 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57262_0 00:11:12.582 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:11:12.582 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57262 00:11:12.582 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:11:12.582 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57262 00:11:12.582 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:11:12.582 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:11:12.582 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:11:12.582 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:11:12.582 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:11:12.582 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:11:12.582 element at address: 0x200003efba40 with size: 1.008118 MiB 00:11:12.582 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:11:12.582 element at address: 0x200000cff180 with size: 1.000488 MiB 00:11:12.582 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57262 00:11:12.582 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:11:12.582 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57262 00:11:12.583 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:11:12.583 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57262 00:11:12.583 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:11:12.583 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57262 00:11:12.583 element at address: 0x20000087f740 with size: 0.500488 MiB 00:11:12.583 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57262 00:11:12.583 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:11:12.583 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57262 00:11:12.583 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:11:12.583 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:11:12.583 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:11:12.583 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:11:12.583 element at address: 0x20001907c540 with size: 0.250488 MiB 00:11:12.583 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:11:12.583 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:11:12.583 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57262 00:11:12.583 element at address: 0x20000085e640 with size: 0.125488 MiB 00:11:12.583 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57262 00:11:12.583 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:11:12.583 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:11:12.583 element at address: 0x200027a65740 with size: 0.023743 MiB 00:11:12.583 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:11:12.583 element at address: 0x20000085a380 with size: 0.016113 MiB 00:11:12.583 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57262 00:11:12.583 element at address: 0x200027a6b880 with size: 0.002441 MiB 00:11:12.583 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:11:12.583 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:11:12.583 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57262 00:11:12.583 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:11:12.583 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57262 00:11:12.583 element at address: 0x20000085a180 with size: 0.000305 MiB 00:11:12.583 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57262 00:11:12.583 element at address: 0x200027a6c340 with size: 0.000305 MiB 00:11:12.583 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:11:12.583 14:37:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:11:12.583 14:37:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57262 00:11:12.583 14:37:21 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 57262 ']' 00:11:12.583 14:37:21 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 57262 00:11:12.583 14:37:21 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:11:12.583 14:37:21 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:12.583 14:37:21 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57262 00:11:12.583 killing process with pid 57262 00:11:12.583 14:37:21 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:12.583 14:37:21 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:12.583 14:37:21 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57262' 00:11:12.583 14:37:21 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 57262 00:11:12.583 14:37:21 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 57262 00:11:12.583 ************************************ 00:11:12.583 END TEST dpdk_mem_utility 00:11:12.583 ************************************ 00:11:12.583 00:11:12.583 real 0m1.385s 00:11:12.583 user 0m1.479s 00:11:12.583 sys 0m0.298s 00:11:12.583 14:37:21 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:12.583 14:37:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:12.583 14:37:21 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:11:12.583 14:37:21 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:12.583 14:37:21 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:12.583 14:37:21 -- common/autotest_common.sh@10 -- # set +x 
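Note: the memory report above is produced in two steps, both visible in the trace: the env_dpdk_get_mem_stats RPC makes the running spdk_tgt write its DPDK allocator state to /tmp/spdk_mem_dump.txt (the RPC's JSON reply carries that filename), and scripts/dpdk_mem_info.py then summarizes the dump, first as the heap/mempool/memzone totals and then, with -m 0, as the per-element detail for heap id 0. A rough hand-driven sketch against the same checkout (direct rpc.py use is an assumption; the suite goes through its rpc_cmd wrapper), before the event suite output below:

    cd /home/vagrant/spdk_repo/spdk
    ./scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                # heaps / mempools / memzones summary
    ./scripts/dpdk_mem_info.py -m 0           # element-level detail for heap id 0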
00:11:12.583 ************************************ 00:11:12.583 START TEST event 00:11:12.583 ************************************ 00:11:12.583 14:37:21 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:11:12.842 * Looking for test storage... 00:11:12.842 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:12.842 14:37:21 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:12.842 14:37:21 event -- common/autotest_common.sh@1691 -- # lcov --version 00:11:12.842 14:37:21 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:12.842 14:37:21 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:12.842 14:37:21 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:12.842 14:37:21 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:12.842 14:37:21 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:12.842 14:37:21 event -- scripts/common.sh@336 -- # IFS=.-: 00:11:12.842 14:37:21 event -- scripts/common.sh@336 -- # read -ra ver1 00:11:12.842 14:37:21 event -- scripts/common.sh@337 -- # IFS=.-: 00:11:12.842 14:37:21 event -- scripts/common.sh@337 -- # read -ra ver2 00:11:12.842 14:37:21 event -- scripts/common.sh@338 -- # local 'op=<' 00:11:12.842 14:37:21 event -- scripts/common.sh@340 -- # ver1_l=2 00:11:12.842 14:37:21 event -- scripts/common.sh@341 -- # ver2_l=1 00:11:12.842 14:37:21 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:12.842 14:37:21 event -- scripts/common.sh@344 -- # case "$op" in 00:11:12.842 14:37:21 event -- scripts/common.sh@345 -- # : 1 00:11:12.842 14:37:21 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:12.842 14:37:21 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:12.842 14:37:21 event -- scripts/common.sh@365 -- # decimal 1 00:11:12.842 14:37:21 event -- scripts/common.sh@353 -- # local d=1 00:11:12.842 14:37:21 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:12.842 14:37:21 event -- scripts/common.sh@355 -- # echo 1 00:11:12.842 14:37:21 event -- scripts/common.sh@365 -- # ver1[v]=1 00:11:12.842 14:37:21 event -- scripts/common.sh@366 -- # decimal 2 00:11:12.842 14:37:21 event -- scripts/common.sh@353 -- # local d=2 00:11:12.842 14:37:21 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:12.842 14:37:21 event -- scripts/common.sh@355 -- # echo 2 00:11:12.842 14:37:21 event -- scripts/common.sh@366 -- # ver2[v]=2 00:11:12.842 14:37:21 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:12.842 14:37:21 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:12.842 14:37:21 event -- scripts/common.sh@368 -- # return 0 00:11:12.842 14:37:21 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:12.842 14:37:21 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:12.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.842 --rc genhtml_branch_coverage=1 00:11:12.842 --rc genhtml_function_coverage=1 00:11:12.842 --rc genhtml_legend=1 00:11:12.842 --rc geninfo_all_blocks=1 00:11:12.842 --rc geninfo_unexecuted_blocks=1 00:11:12.842 00:11:12.842 ' 00:11:12.842 14:37:21 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:12.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.842 --rc genhtml_branch_coverage=1 00:11:12.842 --rc genhtml_function_coverage=1 00:11:12.842 --rc genhtml_legend=1 00:11:12.842 --rc 
geninfo_all_blocks=1 00:11:12.842 --rc geninfo_unexecuted_blocks=1 00:11:12.842 00:11:12.842 ' 00:11:12.842 14:37:21 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:12.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.842 --rc genhtml_branch_coverage=1 00:11:12.842 --rc genhtml_function_coverage=1 00:11:12.842 --rc genhtml_legend=1 00:11:12.842 --rc geninfo_all_blocks=1 00:11:12.842 --rc geninfo_unexecuted_blocks=1 00:11:12.842 00:11:12.842 ' 00:11:12.842 14:37:21 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:12.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.842 --rc genhtml_branch_coverage=1 00:11:12.842 --rc genhtml_function_coverage=1 00:11:12.842 --rc genhtml_legend=1 00:11:12.842 --rc geninfo_all_blocks=1 00:11:12.842 --rc geninfo_unexecuted_blocks=1 00:11:12.842 00:11:12.842 ' 00:11:12.842 14:37:21 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:12.842 14:37:21 event -- bdev/nbd_common.sh@6 -- # set -e 00:11:12.842 14:37:21 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:12.842 14:37:21 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:11:12.842 14:37:21 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:12.842 14:37:21 event -- common/autotest_common.sh@10 -- # set +x 00:11:12.842 ************************************ 00:11:12.842 START TEST event_perf 00:11:12.842 ************************************ 00:11:12.842 14:37:21 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:12.842 Running I/O for 1 seconds...[2024-11-04 14:37:21.814873] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:11:12.842 [2024-11-04 14:37:21.815009] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57342 ] 00:11:12.842 [2024-11-04 14:37:21.954762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:13.101 [2024-11-04 14:37:21.994937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.101 [2024-11-04 14:37:21.995186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:13.101 [2024-11-04 14:37:21.996027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:13.101 [2024-11-04 14:37:21.996035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.034 Running I/O for 1 seconds... 00:11:14.034 lcore 0: 187145 00:11:14.034 lcore 1: 187143 00:11:14.034 lcore 2: 187143 00:11:14.034 lcore 3: 187145 00:11:14.034 done. 
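Note: the event_perf run above was started with -m 0xF -t 1, so four reactors (lcores 0-3) each dispatch events for one second and the per-lcore counters are printed at the end; the roughly 187k events per lcore are the throughput measured on this VM. A hedged direct invocation with the same arguments would be:

    cd /home/vagrant/spdk_repo/spdk
    ./test/event/event_perf/event_perf -m 0xF -t 1   # 4-core mask, 1 second run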
00:11:14.034 00:11:14.034 ************************************ 00:11:14.034 END TEST event_perf 00:11:14.034 ************************************ 00:11:14.034 real 0m1.228s 00:11:14.034 user 0m4.075s 00:11:14.034 sys 0m0.033s 00:11:14.034 14:37:23 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:14.034 14:37:23 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:11:14.034 14:37:23 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:14.034 14:37:23 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:14.034 14:37:23 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:14.034 14:37:23 event -- common/autotest_common.sh@10 -- # set +x 00:11:14.034 ************************************ 00:11:14.034 START TEST event_reactor 00:11:14.034 ************************************ 00:11:14.034 14:37:23 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:14.034 [2024-11-04 14:37:23.083300] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:11:14.034 [2024-11-04 14:37:23.083397] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57375 ] 00:11:14.292 [2024-11-04 14:37:23.229692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.293 [2024-11-04 14:37:23.266345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.247 test_start 00:11:15.247 oneshot 00:11:15.247 tick 100 00:11:15.247 tick 100 00:11:15.247 tick 250 00:11:15.247 tick 100 00:11:15.247 tick 100 00:11:15.247 tick 100 00:11:15.247 tick 250 00:11:15.247 tick 500 00:11:15.247 tick 100 00:11:15.247 tick 100 00:11:15.247 tick 250 00:11:15.247 tick 100 00:11:15.247 tick 100 00:11:15.247 test_end 00:11:15.247 00:11:15.247 real 0m1.229s 00:11:15.247 user 0m1.083s 00:11:15.247 sys 0m0.038s 00:11:15.247 14:37:24 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:15.247 14:37:24 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:11:15.247 ************************************ 00:11:15.247 END TEST event_reactor 00:11:15.248 ************************************ 00:11:15.248 14:37:24 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:15.248 14:37:24 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:15.248 14:37:24 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:15.248 14:37:24 event -- common/autotest_common.sh@10 -- # set +x 00:11:15.248 ************************************ 00:11:15.248 START TEST event_reactor_perf 00:11:15.248 ************************************ 00:11:15.248 14:37:24 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:15.248 [2024-11-04 14:37:24.356799] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
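Note: the reactor test that just completed runs a single reactor (core mask 0x1) for one second; the oneshot and tick lines are the test's own markers for the events it schedules and sees fire, with the 100/250/500 values presumably mirroring the tick intervals it registers. The reactor_perf binary starting below measures raw event throughput on the same single reactor. Hedged direct invocations with the arguments used here:

    cd /home/vagrant/spdk_repo/spdk
    ./test/event/reactor/reactor -t 1              # single reactor, timed tick events
    ./test/event/reactor_perf/reactor_perf -t 1    # single reactor, event throughput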
00:11:15.248 [2024-11-04 14:37:24.356864] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57406 ] 00:11:15.505 [2024-11-04 14:37:24.495032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.505 [2024-11-04 14:37:24.531464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.437 test_start 00:11:16.437 test_end 00:11:16.437 Performance: 382581 events per second 00:11:16.437 00:11:16.437 real 0m1.225s 00:11:16.437 user 0m1.085s 00:11:16.437 sys 0m0.032s 00:11:16.437 ************************************ 00:11:16.437 END TEST event_reactor_perf 00:11:16.437 ************************************ 00:11:16.437 14:37:25 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:16.437 14:37:25 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:11:16.695 14:37:25 event -- event/event.sh@49 -- # uname -s 00:11:16.695 14:37:25 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:11:16.695 14:37:25 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:11:16.695 14:37:25 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:16.695 14:37:25 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:16.695 14:37:25 event -- common/autotest_common.sh@10 -- # set +x 00:11:16.695 ************************************ 00:11:16.695 START TEST event_scheduler 00:11:16.695 ************************************ 00:11:16.695 14:37:25 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:11:16.695 * Looking for test storage... 
00:11:16.695 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:11:16.695 14:37:25 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:16.695 14:37:25 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:16.695 14:37:25 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:11:16.695 14:37:25 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:16.695 14:37:25 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:16.695 14:37:25 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:16.695 14:37:25 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:16.695 14:37:25 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.695 14:37:25 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:11:16.695 14:37:25 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:11:16.695 14:37:25 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:11:16.695 14:37:25 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:11:16.695 14:37:25 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:11:16.695 14:37:25 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:11:16.695 14:37:25 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:16.695 14:37:25 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:11:16.695 14:37:25 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:11:16.695 14:37:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:16.696 14:37:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:16.696 14:37:25 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:11:16.696 14:37:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:11:16.696 14:37:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.696 14:37:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:11:16.696 14:37:25 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.696 14:37:25 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:11:16.696 14:37:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:11:16.696 14:37:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.696 14:37:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:11:16.696 14:37:25 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.696 14:37:25 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.696 14:37:25 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.696 14:37:25 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:11:16.696 14:37:25 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.696 14:37:25 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:16.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.696 --rc genhtml_branch_coverage=1 00:11:16.696 --rc genhtml_function_coverage=1 00:11:16.696 --rc genhtml_legend=1 00:11:16.696 --rc geninfo_all_blocks=1 00:11:16.696 --rc geninfo_unexecuted_blocks=1 00:11:16.696 00:11:16.696 ' 00:11:16.696 14:37:25 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:16.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.696 --rc genhtml_branch_coverage=1 00:11:16.696 --rc genhtml_function_coverage=1 00:11:16.696 --rc genhtml_legend=1 00:11:16.696 --rc geninfo_all_blocks=1 00:11:16.696 --rc geninfo_unexecuted_blocks=1 00:11:16.696 00:11:16.696 ' 00:11:16.696 14:37:25 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:16.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.696 --rc genhtml_branch_coverage=1 00:11:16.696 --rc genhtml_function_coverage=1 00:11:16.696 --rc genhtml_legend=1 00:11:16.696 --rc geninfo_all_blocks=1 00:11:16.696 --rc geninfo_unexecuted_blocks=1 00:11:16.696 00:11:16.696 ' 00:11:16.696 14:37:25 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:16.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.696 --rc genhtml_branch_coverage=1 00:11:16.696 --rc genhtml_function_coverage=1 00:11:16.696 --rc genhtml_legend=1 00:11:16.696 --rc geninfo_all_blocks=1 00:11:16.696 --rc geninfo_unexecuted_blocks=1 00:11:16.696 00:11:16.696 ' 00:11:16.696 14:37:25 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:11:16.696 14:37:25 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=57474 00:11:16.696 14:37:25 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:11:16.696 14:37:25 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 57474 00:11:16.696 14:37:25 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 57474 ']' 00:11:16.696 14:37:25 event.event_scheduler -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:11:16.696 14:37:25 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:16.696 14:37:25 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:11:16.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.696 14:37:25 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.696 14:37:25 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:16.696 14:37:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:16.696 [2024-11-04 14:37:25.777440] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:11:16.696 [2024-11-04 14:37:25.777512] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57474 ] 00:11:16.955 [2024-11-04 14:37:25.919777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:16.955 [2024-11-04 14:37:25.968202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.955 [2024-11-04 14:37:25.968305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.955 [2024-11-04 14:37:25.968808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.955 [2024-11-04 14:37:25.968816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.955 14:37:26 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:16.955 14:37:26 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:11:16.955 14:37:26 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:11:16.955 14:37:26 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.955 14:37:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:16.955 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:16.955 POWER: Cannot set governor of lcore 0 to userspace 00:11:16.955 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:16.955 POWER: Cannot set governor of lcore 0 to performance 00:11:16.955 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:16.955 POWER: Cannot set governor of lcore 0 to userspace 00:11:16.955 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:16.955 POWER: Cannot set governor of lcore 0 to userspace 00:11:16.955 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:11:16.955 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:11:16.955 POWER: Unable to set Power Management Environment for lcore 0 00:11:16.955 [2024-11-04 14:37:26.057476] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:11:16.955 [2024-11-04 14:37:26.057486] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:11:16.955 [2024-11-04 14:37:26.057491] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:11:16.955 [2024-11-04 14:37:26.057498] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:11:16.955 [2024-11-04 14:37:26.057503] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:11:16.955 [2024-11-04 14:37:26.057507] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:11:16.955 14:37:26 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.955 14:37:26 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:11:16.955 14:37:26 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.955 14:37:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:17.214 [2024-11-04 14:37:26.096983] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:17.214 [2024-11-04 14:37:26.121257] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:11:17.214 14:37:26 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.214 14:37:26 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:11:17.214 14:37:26 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:17.214 14:37:26 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:17.214 14:37:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:17.214 ************************************ 00:11:17.214 START TEST scheduler_create_thread 00:11:17.214 ************************************ 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:17.214 2 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:17.214 3 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:17.214 4 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:17.214 5 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:17.214 6 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:17.214 7 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:17.214 8 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:17.214 9 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:17.214 10 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.214 14:37:26 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:11:17.214 14:37:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:11:17.215 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.215 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:17.781 ************************************ 00:11:17.781 END TEST scheduler_create_thread 00:11:17.781 ************************************ 00:11:17.781 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.781 00:11:17.781 real 0m0.590s 00:11:17.781 user 0m0.014s 00:11:17.781 sys 0m0.001s 00:11:17.781 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:17.781 14:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:17.781 14:37:26 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:11:17.781 14:37:26 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 57474 00:11:17.781 14:37:26 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 57474 ']' 00:11:17.781 14:37:26 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 57474 00:11:17.781 14:37:26 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:11:17.781 14:37:26 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:17.781 14:37:26 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57474 00:11:17.781 killing process with pid 57474 00:11:17.781 14:37:26 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:11:17.781 14:37:26 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:11:17.781 14:37:26 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
57474' 00:11:17.781 14:37:26 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 57474 00:11:17.781 14:37:26 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 57474 00:11:18.347 [2024-11-04 14:37:27.198655] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:11:18.347 ************************************ 00:11:18.347 END TEST event_scheduler 00:11:18.347 ************************************ 00:11:18.347 00:11:18.347 real 0m1.712s 00:11:18.347 user 0m2.168s 00:11:18.347 sys 0m0.252s 00:11:18.347 14:37:27 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:18.347 14:37:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:18.347 14:37:27 event -- event/event.sh@51 -- # modprobe -n nbd 00:11:18.347 14:37:27 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:11:18.347 14:37:27 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:18.347 14:37:27 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:18.347 14:37:27 event -- common/autotest_common.sh@10 -- # set +x 00:11:18.347 ************************************ 00:11:18.347 START TEST app_repeat 00:11:18.347 ************************************ 00:11:18.347 14:37:27 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:11:18.347 14:37:27 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:18.347 14:37:27 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:18.347 14:37:27 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:11:18.347 14:37:27 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:18.347 14:37:27 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:11:18.347 14:37:27 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:11:18.347 14:37:27 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:11:18.347 Process app_repeat pid: 57539 00:11:18.347 spdk_app_start Round 0 00:11:18.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:18.347 14:37:27 event.app_repeat -- event/event.sh@19 -- # repeat_pid=57539 00:11:18.347 14:37:27 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:11:18.347 14:37:27 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57539' 00:11:18.347 14:37:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:18.347 14:37:27 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:11:18.347 14:37:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:11:18.347 14:37:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57539 /var/tmp/spdk-nbd.sock 00:11:18.347 14:37:27 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 57539 ']' 00:11:18.347 14:37:27 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:18.347 14:37:27 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:18.347 14:37:27 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:11:18.347 14:37:27 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:18.347 14:37:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:18.347 [2024-11-04 14:37:27.381038] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:11:18.348 [2024-11-04 14:37:27.381109] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57539 ] 00:11:18.632 [2024-11-04 14:37:27.524286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:18.632 [2024-11-04 14:37:27.574593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.632 [2024-11-04 14:37:27.574622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.632 [2024-11-04 14:37:27.613062] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:18.632 14:37:27 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:18.632 14:37:27 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:11:18.632 14:37:27 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:18.925 Malloc0 00:11:18.925 14:37:27 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:19.184 Malloc1 00:11:19.184 14:37:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:19.184 14:37:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:19.184 14:37:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:19.184 14:37:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:19.184 14:37:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:19.184 14:37:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:19.184 14:37:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:19.184 14:37:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:19.184 14:37:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:19.184 14:37:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:19.184 14:37:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:19.184 14:37:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:19.184 14:37:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:19.184 14:37:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:19.184 14:37:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:19.184 14:37:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:19.442 /dev/nbd0 00:11:19.442 14:37:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:19.442 14:37:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:19.442 14:37:28 event.app_repeat -- common/autotest_common.sh@870 -- # local 
nbd_name=nbd0 00:11:19.442 14:37:28 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:11:19.442 14:37:28 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:19.442 14:37:28 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:19.442 14:37:28 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:11:19.442 14:37:28 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:11:19.442 14:37:28 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:19.442 14:37:28 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:19.442 14:37:28 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:19.442 1+0 records in 00:11:19.442 1+0 records out 00:11:19.442 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202438 s, 20.2 MB/s 00:11:19.442 14:37:28 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:19.442 14:37:28 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:11:19.442 14:37:28 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:19.442 14:37:28 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:19.442 14:37:28 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:11:19.442 14:37:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:19.442 14:37:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:19.442 14:37:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:19.699 /dev/nbd1 00:11:19.699 14:37:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:19.699 14:37:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:19.699 14:37:28 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:11:19.699 14:37:28 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:11:19.699 14:37:28 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:19.699 14:37:28 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:19.699 14:37:28 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:11:19.699 14:37:28 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:11:19.700 14:37:28 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:19.700 14:37:28 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:19.700 14:37:28 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:19.700 1+0 records in 00:11:19.700 1+0 records out 00:11:19.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285395 s, 14.4 MB/s 00:11:19.700 14:37:28 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:19.700 14:37:28 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:11:19.700 14:37:28 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:19.700 14:37:28 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:19.700 14:37:28 event.app_repeat -- 
common/autotest_common.sh@891 -- # return 0 00:11:19.700 14:37:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:19.700 14:37:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:19.700 14:37:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:19.700 14:37:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:19.700 14:37:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:19.957 14:37:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:19.957 { 00:11:19.957 "nbd_device": "/dev/nbd0", 00:11:19.957 "bdev_name": "Malloc0" 00:11:19.957 }, 00:11:19.957 { 00:11:19.957 "nbd_device": "/dev/nbd1", 00:11:19.957 "bdev_name": "Malloc1" 00:11:19.957 } 00:11:19.957 ]' 00:11:19.957 14:37:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:19.957 { 00:11:19.957 "nbd_device": "/dev/nbd0", 00:11:19.957 "bdev_name": "Malloc0" 00:11:19.957 }, 00:11:19.957 { 00:11:19.957 "nbd_device": "/dev/nbd1", 00:11:19.957 "bdev_name": "Malloc1" 00:11:19.957 } 00:11:19.957 ]' 00:11:19.957 14:37:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:19.957 14:37:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:19.957 /dev/nbd1' 00:11:19.957 14:37:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:19.957 /dev/nbd1' 00:11:19.957 14:37:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:19.957 14:37:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:19.957 14:37:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:19.957 14:37:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:19.957 14:37:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:19.957 14:37:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:19.957 14:37:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:19.957 14:37:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:19.957 14:37:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:19.957 14:37:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:19.957 14:37:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:19.957 14:37:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:19.957 256+0 records in 00:11:19.957 256+0 records out 00:11:19.957 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00683948 s, 153 MB/s 00:11:19.957 14:37:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:19.957 14:37:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:19.957 256+0 records in 00:11:19.957 256+0 records out 00:11:19.957 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0168882 s, 62.1 MB/s 00:11:19.958 14:37:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:19.958 14:37:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:19.958 256+0 records in 00:11:19.958 
256+0 records out 00:11:19.958 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021546 s, 48.7 MB/s 00:11:19.958 14:37:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:19.958 14:37:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:19.958 14:37:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:19.958 14:37:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:19.958 14:37:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:19.958 14:37:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:19.958 14:37:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:19.958 14:37:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:19.958 14:37:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:19.958 14:37:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:19.958 14:37:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:19.958 14:37:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:19.958 14:37:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:19.958 14:37:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:19.958 14:37:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:19.958 14:37:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:19.958 14:37:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:19.958 14:37:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:19.958 14:37:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:20.215 14:37:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:20.215 14:37:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:20.215 14:37:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:20.215 14:37:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:20.215 14:37:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:20.215 14:37:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:20.215 14:37:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:20.215 14:37:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:20.215 14:37:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:20.215 14:37:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:20.473 14:37:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:20.473 14:37:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:20.473 14:37:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:20.473 14:37:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:20.473 14:37:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:11:20.473 14:37:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:20.473 14:37:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:20.473 14:37:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:20.473 14:37:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:20.473 14:37:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:20.473 14:37:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:20.473 14:37:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:20.473 14:37:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:20.473 14:37:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:20.473 14:37:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:20.473 14:37:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:20.473 14:37:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:20.473 14:37:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:20.473 14:37:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:20.473 14:37:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:20.473 14:37:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:20.473 14:37:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:20.473 14:37:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:20.473 14:37:29 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:20.729 14:37:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:20.729 [2024-11-04 14:37:29.856970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:20.985 [2024-11-04 14:37:29.892215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.985 [2024-11-04 14:37:29.892233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.985 [2024-11-04 14:37:29.923819] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:20.985 [2024-11-04 14:37:29.923879] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:20.985 [2024-11-04 14:37:29.923886] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:24.264 spdk_app_start Round 1 00:11:24.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:24.264 14:37:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:24.264 14:37:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:11:24.264 14:37:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57539 /var/tmp/spdk-nbd.sock 00:11:24.264 14:37:32 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 57539 ']' 00:11:24.264 14:37:32 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:24.264 14:37:32 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:24.264 14:37:32 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:11:24.264 14:37:32 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:24.264 14:37:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:24.264 14:37:33 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:24.264 14:37:33 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:11:24.264 14:37:33 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:24.264 Malloc0 00:11:24.264 14:37:33 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:24.522 Malloc1 00:11:24.522 14:37:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:24.522 14:37:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:24.522 14:37:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:24.522 14:37:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:24.522 14:37:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:24.522 14:37:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:24.522 14:37:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:24.522 14:37:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:24.522 14:37:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:24.522 14:37:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:24.522 14:37:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:24.522 14:37:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:24.522 14:37:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:24.522 14:37:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:24.522 14:37:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:24.522 14:37:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:24.779 /dev/nbd0 00:11:24.779 14:37:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:24.779 14:37:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:24.779 14:37:33 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:11:24.779 14:37:33 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:11:24.779 14:37:33 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:24.779 14:37:33 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:24.779 14:37:33 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:11:24.779 14:37:33 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:11:24.779 14:37:33 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:24.779 14:37:33 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:24.779 14:37:33 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:24.779 1+0 records in 00:11:24.779 1+0 records out 
00:11:24.779 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027163 s, 15.1 MB/s 00:11:24.779 14:37:33 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:24.779 14:37:33 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:11:24.779 14:37:33 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:24.779 14:37:33 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:24.779 14:37:33 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:11:24.780 14:37:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:24.780 14:37:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:24.780 14:37:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:25.049 /dev/nbd1 00:11:25.049 14:37:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:25.049 14:37:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:25.049 14:37:33 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:11:25.049 14:37:33 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:11:25.049 14:37:33 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:25.049 14:37:33 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:25.049 14:37:33 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:11:25.049 14:37:33 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:11:25.049 14:37:33 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:25.049 14:37:33 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:25.049 14:37:33 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:25.049 1+0 records in 00:11:25.049 1+0 records out 00:11:25.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000130966 s, 31.3 MB/s 00:11:25.049 14:37:33 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:25.049 14:37:33 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:11:25.049 14:37:33 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:25.049 14:37:33 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:25.049 14:37:33 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:11:25.049 14:37:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:25.049 14:37:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:25.049 14:37:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:25.049 14:37:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:25.049 14:37:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:25.049 14:37:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:25.049 { 00:11:25.049 "nbd_device": "/dev/nbd0", 00:11:25.049 "bdev_name": "Malloc0" 00:11:25.049 }, 00:11:25.049 { 00:11:25.049 "nbd_device": "/dev/nbd1", 00:11:25.049 "bdev_name": "Malloc1" 00:11:25.049 } 
00:11:25.049 ]' 00:11:25.049 14:37:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:25.049 { 00:11:25.049 "nbd_device": "/dev/nbd0", 00:11:25.049 "bdev_name": "Malloc0" 00:11:25.049 }, 00:11:25.049 { 00:11:25.049 "nbd_device": "/dev/nbd1", 00:11:25.049 "bdev_name": "Malloc1" 00:11:25.049 } 00:11:25.049 ]' 00:11:25.049 14:37:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:25.327 /dev/nbd1' 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:25.327 /dev/nbd1' 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:25.327 256+0 records in 00:11:25.327 256+0 records out 00:11:25.327 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00643303 s, 163 MB/s 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:25.327 256+0 records in 00:11:25.327 256+0 records out 00:11:25.327 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149284 s, 70.2 MB/s 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:25.327 256+0 records in 00:11:25.327 256+0 records out 00:11:25.327 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138278 s, 75.8 MB/s 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:25.327 14:37:34 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:25.327 14:37:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:25.585 14:37:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:25.585 14:37:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:25.585 14:37:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:25.585 14:37:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:25.585 14:37:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:25.585 14:37:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:25.585 14:37:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:25.585 14:37:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:25.585 14:37:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:25.585 14:37:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:25.585 14:37:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:25.585 14:37:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:25.585 14:37:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:25.585 14:37:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:25.585 14:37:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:25.585 14:37:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:25.585 14:37:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:25.585 14:37:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:25.585 14:37:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:25.585 14:37:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:25.585 14:37:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:25.843 14:37:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:25.843 14:37:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:25.843 14:37:34 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:11:25.843 14:37:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:25.843 14:37:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:25.843 14:37:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:25.843 14:37:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:25.843 14:37:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:25.843 14:37:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:25.843 14:37:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:25.843 14:37:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:25.843 14:37:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:25.843 14:37:34 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:26.100 14:37:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:26.100 [2024-11-04 14:37:35.162431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:26.100 [2024-11-04 14:37:35.194212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.100 [2024-11-04 14:37:35.194217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.100 [2024-11-04 14:37:35.224857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:26.100 [2024-11-04 14:37:35.224912] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:26.100 [2024-11-04 14:37:35.224918] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:29.391 14:37:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:29.391 spdk_app_start Round 2 00:11:29.391 14:37:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:11:29.391 14:37:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57539 /var/tmp/spdk-nbd.sock 00:11:29.391 14:37:38 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 57539 ']' 00:11:29.391 14:37:38 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:29.391 14:37:38 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:29.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:29.391 14:37:38 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:11:29.391 14:37:38 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:29.391 14:37:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:29.391 14:37:38 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:29.391 14:37:38 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:11:29.391 14:37:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:29.391 Malloc0 00:11:29.391 14:37:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:29.649 Malloc1 00:11:29.649 14:37:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:29.649 14:37:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:29.649 14:37:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:29.649 14:37:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:29.649 14:37:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:29.649 14:37:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:29.649 14:37:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:29.649 14:37:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:29.649 14:37:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:29.649 14:37:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:29.649 14:37:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:29.649 14:37:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:29.649 14:37:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:29.649 14:37:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:29.649 14:37:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:29.649 14:37:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:29.907 /dev/nbd0 00:11:29.907 14:37:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:29.907 14:37:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:29.907 14:37:38 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:11:29.907 14:37:38 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:11:29.907 14:37:38 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:29.907 14:37:38 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:29.907 14:37:38 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:11:29.907 14:37:38 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:11:29.907 14:37:38 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:29.907 14:37:38 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:29.907 14:37:38 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:29.907 1+0 records in 00:11:29.907 1+0 records out 
00:11:29.907 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212223 s, 19.3 MB/s 00:11:29.907 14:37:38 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:29.907 14:37:38 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:11:29.907 14:37:38 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:29.907 14:37:38 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:29.907 14:37:38 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:11:29.907 14:37:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:29.907 14:37:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:29.907 14:37:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:30.164 /dev/nbd1 00:11:30.164 14:37:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:30.164 14:37:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:30.164 14:37:39 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:11:30.164 14:37:39 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:11:30.164 14:37:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:30.164 14:37:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:30.164 14:37:39 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:11:30.164 14:37:39 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:11:30.164 14:37:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:30.164 14:37:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:30.164 14:37:39 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:30.164 1+0 records in 00:11:30.164 1+0 records out 00:11:30.164 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318844 s, 12.8 MB/s 00:11:30.164 14:37:39 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:30.164 14:37:39 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:11:30.164 14:37:39 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:30.164 14:37:39 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:30.164 14:37:39 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:11:30.164 14:37:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:30.165 14:37:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:30.165 14:37:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:30.165 14:37:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:30.165 14:37:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:30.422 { 00:11:30.422 "nbd_device": "/dev/nbd0", 00:11:30.422 "bdev_name": "Malloc0" 00:11:30.422 }, 00:11:30.422 { 00:11:30.422 "nbd_device": "/dev/nbd1", 00:11:30.422 "bdev_name": "Malloc1" 00:11:30.422 } 
00:11:30.422 ]' 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:30.422 { 00:11:30.422 "nbd_device": "/dev/nbd0", 00:11:30.422 "bdev_name": "Malloc0" 00:11:30.422 }, 00:11:30.422 { 00:11:30.422 "nbd_device": "/dev/nbd1", 00:11:30.422 "bdev_name": "Malloc1" 00:11:30.422 } 00:11:30.422 ]' 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:30.422 /dev/nbd1' 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:30.422 /dev/nbd1' 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:30.422 256+0 records in 00:11:30.422 256+0 records out 00:11:30.422 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104498 s, 100 MB/s 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:30.422 256+0 records in 00:11:30.422 256+0 records out 00:11:30.422 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129986 s, 80.7 MB/s 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:30.422 256+0 records in 00:11:30.422 256+0 records out 00:11:30.422 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142184 s, 73.7 MB/s 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:30.422 14:37:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:30.679 14:37:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:30.679 14:37:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:30.679 14:37:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:30.679 14:37:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:30.679 14:37:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:30.679 14:37:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:30.679 14:37:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:30.679 14:37:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:30.679 14:37:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:30.679 14:37:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:30.936 14:37:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:30.936 14:37:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:30.936 14:37:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:30.936 14:37:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:30.936 14:37:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:30.936 14:37:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:30.936 14:37:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:30.936 14:37:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:30.936 14:37:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:30.936 14:37:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:30.936 14:37:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:31.194 14:37:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:31.194 14:37:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:31.194 14:37:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:11:31.194 14:37:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:31.194 14:37:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:31.194 14:37:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:31.194 14:37:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:31.194 14:37:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:31.194 14:37:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:31.194 14:37:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:31.194 14:37:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:31.194 14:37:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:31.194 14:37:40 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:31.451 14:37:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:31.451 [2024-11-04 14:37:40.506247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:31.451 [2024-11-04 14:37:40.537756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.451 [2024-11-04 14:37:40.538081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.451 [2024-11-04 14:37:40.567367] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:31.451 [2024-11-04 14:37:40.567417] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:31.451 [2024-11-04 14:37:40.567424] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:34.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:34.751 14:37:43 event.app_repeat -- event/event.sh@38 -- # waitforlisten 57539 /var/tmp/spdk-nbd.sock 00:11:34.751 14:37:43 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 57539 ']' 00:11:34.751 14:37:43 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:34.751 14:37:43 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:34.751 14:37:43 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
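For reference, the write/verify pass traced above reduces to a short dd/cmp loop; the following is a minimal sketch of that flow, with an assumed temp path and device list standing in for the real nbd_common.sh helpers.

# Sketch only - illustrative reconstruction of the verify pass above, not the actual helper.
TMP=/tmp/nbdrandtest                 # assumed path; the run above uses test/event/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)
# write phase: generate 1 MiB of random data, then copy it onto each exported NBD device
dd if=/dev/urandom of="$TMP" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
  dd if="$TMP" of="$dev" bs=4096 count=256 oflag=direct
done
# verify phase: byte-compare the first 1 MiB of every device against the source file
for dev in "${nbd_list[@]}"; do
  cmp -b -n 1M "$TMP" "$dev"
done
rm "$TMP"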
00:11:34.751 14:37:43 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:34.751 14:37:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:34.751 14:37:43 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:34.751 14:37:43 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:11:34.751 14:37:43 event.app_repeat -- event/event.sh@39 -- # killprocess 57539 00:11:34.751 14:37:43 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 57539 ']' 00:11:34.751 14:37:43 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 57539 00:11:34.751 14:37:43 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:11:34.751 14:37:43 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:34.751 14:37:43 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57539 00:11:34.751 killing process with pid 57539 00:11:34.751 14:37:43 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:34.751 14:37:43 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:34.751 14:37:43 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57539' 00:11:34.751 14:37:43 event.app_repeat -- common/autotest_common.sh@971 -- # kill 57539 00:11:34.751 14:37:43 event.app_repeat -- common/autotest_common.sh@976 -- # wait 57539 00:11:34.751 spdk_app_start is called in Round 0. 00:11:34.751 Shutdown signal received, stop current app iteration 00:11:34.751 Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 reinitialization... 00:11:34.751 spdk_app_start is called in Round 1. 00:11:34.751 Shutdown signal received, stop current app iteration 00:11:34.751 Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 reinitialization... 00:11:34.751 spdk_app_start is called in Round 2. 00:11:34.751 Shutdown signal received, stop current app iteration 00:11:34.751 Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 reinitialization... 00:11:34.751 spdk_app_start is called in Round 3. 00:11:34.751 Shutdown signal received, stop current app iteration 00:11:34.751 14:37:43 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:11:34.751 14:37:43 event.app_repeat -- event/event.sh@42 -- # return 0 00:11:34.751 00:11:34.751 real 0m16.426s 00:11:34.751 user 0m36.919s 00:11:34.751 sys 0m2.000s 00:11:34.751 14:37:43 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:34.751 ************************************ 00:11:34.751 END TEST app_repeat 00:11:34.751 ************************************ 00:11:34.751 14:37:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:34.751 14:37:43 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:11:34.751 14:37:43 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:34.751 14:37:43 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:34.751 14:37:43 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:34.751 14:37:43 event -- common/autotest_common.sh@10 -- # set +x 00:11:34.751 ************************************ 00:11:34.751 START TEST cpu_locks 00:11:34.751 ************************************ 00:11:34.751 14:37:43 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:35.027 * Looking for test storage... 
00:11:35.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:35.027 14:37:43 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:35.027 14:37:43 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:11:35.027 14:37:43 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:35.027 14:37:43 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:35.027 14:37:43 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.027 14:37:43 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.027 14:37:43 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.027 14:37:43 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.027 14:37:43 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.027 14:37:43 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.027 14:37:43 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.027 14:37:43 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.027 14:37:43 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.027 14:37:43 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.027 14:37:43 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.027 14:37:43 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:11:35.027 14:37:43 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:11:35.027 14:37:43 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.027 14:37:43 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:35.027 14:37:43 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:11:35.027 14:37:43 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:11:35.027 14:37:43 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.027 14:37:43 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:11:35.027 14:37:43 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.027 14:37:43 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:11:35.027 14:37:43 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:11:35.027 14:37:43 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.027 14:37:43 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:11:35.027 14:37:43 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.027 14:37:43 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.027 14:37:43 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.027 14:37:43 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:11:35.027 14:37:43 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.027 14:37:43 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:35.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.027 --rc genhtml_branch_coverage=1 00:11:35.027 --rc genhtml_function_coverage=1 00:11:35.027 --rc genhtml_legend=1 00:11:35.027 --rc geninfo_all_blocks=1 00:11:35.027 --rc geninfo_unexecuted_blocks=1 00:11:35.027 00:11:35.027 ' 00:11:35.027 14:37:43 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:35.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.027 --rc genhtml_branch_coverage=1 00:11:35.027 --rc genhtml_function_coverage=1 
00:11:35.027 --rc genhtml_legend=1 00:11:35.027 --rc geninfo_all_blocks=1 00:11:35.027 --rc geninfo_unexecuted_blocks=1 00:11:35.027 00:11:35.027 ' 00:11:35.027 14:37:43 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:35.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.027 --rc genhtml_branch_coverage=1 00:11:35.027 --rc genhtml_function_coverage=1 00:11:35.027 --rc genhtml_legend=1 00:11:35.027 --rc geninfo_all_blocks=1 00:11:35.027 --rc geninfo_unexecuted_blocks=1 00:11:35.027 00:11:35.027 ' 00:11:35.027 14:37:43 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:35.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.027 --rc genhtml_branch_coverage=1 00:11:35.027 --rc genhtml_function_coverage=1 00:11:35.028 --rc genhtml_legend=1 00:11:35.028 --rc geninfo_all_blocks=1 00:11:35.028 --rc geninfo_unexecuted_blocks=1 00:11:35.028 00:11:35.028 ' 00:11:35.028 14:37:43 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:11:35.028 14:37:43 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:11:35.028 14:37:43 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:11:35.028 14:37:43 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:11:35.028 14:37:43 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:35.028 14:37:43 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:35.028 14:37:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:35.028 ************************************ 00:11:35.028 START TEST default_locks 00:11:35.028 ************************************ 00:11:35.028 14:37:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:11:35.028 14:37:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57954 00:11:35.028 14:37:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 57954 00:11:35.028 14:37:43 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 57954 ']' 00:11:35.028 14:37:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:35.028 14:37:43 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.028 14:37:43 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:35.028 14:37:43 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.028 14:37:43 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:35.028 14:37:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:35.028 [2024-11-04 14:37:44.010308] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:11:35.028 [2024-11-04 14:37:44.010412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57954 ] 00:11:35.028 [2024-11-04 14:37:44.156110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.286 [2024-11-04 14:37:44.192116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.286 [2024-11-04 14:37:44.238137] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:35.853 14:37:44 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:35.853 14:37:44 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:11:35.853 14:37:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 57954 00:11:35.853 14:37:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 57954 00:11:35.853 14:37:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:36.114 14:37:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 57954 00:11:36.114 14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 57954 ']' 00:11:36.114 14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 57954 00:11:36.114 14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:11:36.114 14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:36.114 14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57954 00:11:36.114 14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:36.114 14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:36.114 killing process with pid 57954 00:11:36.114 14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57954' 00:11:36.114 14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 57954 00:11:36.114 14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 57954 00:11:36.372 14:37:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57954 00:11:36.372 14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:11:36.372 14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57954 00:11:36.372 14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:11:36.372 14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:36.372 14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:11:36.372 14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:36.372 14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 57954 00:11:36.372 14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 57954 ']' 00:11:36.372 14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.372 
14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:36.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.372 14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.372 14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:36.372 14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:36.372 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (57954) - No such process 00:11:36.372 ERROR: process (pid: 57954) is no longer running 00:11:36.372 14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:36.372 14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:11:36.372 14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:11:36.372 14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:36.372 14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:36.372 14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:36.372 14:37:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:11:36.372 14:37:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:36.372 14:37:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:11:36.372 14:37:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:36.372 00:11:36.372 real 0m1.407s 00:11:36.372 user 0m1.549s 00:11:36.372 sys 0m0.354s 00:11:36.372 14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:36.372 14:37:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:36.372 ************************************ 00:11:36.372 END TEST default_locks 00:11:36.372 ************************************ 00:11:36.372 14:37:45 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:36.372 14:37:45 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:36.372 14:37:45 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:36.372 14:37:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:36.372 ************************************ 00:11:36.372 START TEST default_locks_via_rpc 00:11:36.372 ************************************ 00:11:36.372 14:37:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:11:36.372 14:37:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58002 00:11:36.372 14:37:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:36.372 14:37:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58002 00:11:36.372 14:37:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58002 ']' 00:11:36.372 14:37:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.372 14:37:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:11:36.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.372 14:37:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.372 14:37:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:36.372 14:37:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.372 [2024-11-04 14:37:45.445728] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:11:36.373 [2024-11-04 14:37:45.445805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58002 ] 00:11:36.632 [2024-11-04 14:37:45.582839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.632 [2024-11-04 14:37:45.619150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.632 [2024-11-04 14:37:45.665992] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:37.199 14:37:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:37.199 14:37:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:37.199 14:37:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:37.199 14:37:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.199 14:37:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.200 14:37:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.200 14:37:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:11:37.200 14:37:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:37.200 14:37:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:11:37.200 14:37:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:37.200 14:37:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:37.200 14:37:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.200 14:37:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.459 14:37:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.459 14:37:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58002 00:11:37.459 14:37:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58002 00:11:37.459 14:37:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:37.459 14:37:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58002 00:11:37.459 14:37:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 58002 ']' 00:11:37.459 14:37:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 58002 00:11:37.459 14:37:46 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:11:37.459 14:37:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:37.459 14:37:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58002 00:11:37.459 14:37:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:37.459 14:37:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:37.459 killing process with pid 58002 00:11:37.459 14:37:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58002' 00:11:37.459 14:37:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 58002 00:11:37.459 14:37:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 58002 00:11:37.720 00:11:37.720 real 0m1.365s 00:11:37.720 user 0m1.475s 00:11:37.720 sys 0m0.347s 00:11:37.720 14:37:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:37.720 ************************************ 00:11:37.720 END TEST default_locks_via_rpc 00:11:37.720 ************************************ 00:11:37.720 14:37:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.720 14:37:46 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:37.720 14:37:46 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:37.720 14:37:46 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:37.720 14:37:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:37.720 ************************************ 00:11:37.720 START TEST non_locking_app_on_locked_coremask 00:11:37.720 ************************************ 00:11:37.720 14:37:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:11:37.720 14:37:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58042 00:11:37.720 14:37:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58042 /var/tmp/spdk.sock 00:11:37.720 14:37:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58042 ']' 00:11:37.720 14:37:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.720 14:37:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:37.720 14:37:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:37.720 14:37:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
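The locks_exist checks that recur throughout these tests boil down to asking lslocks whether the target pid holds a lock on one of the /var/tmp/spdk_cpu_lock_* files; a minimal sketch follows, with the pid passed in as an argument rather than the captured spdk_tgt_pid.

# Sketch only - simplified form of the lock check seen in the trace.
pid="$1"                              # e.g. 57954 or 58042 in the runs above
if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
  echo "pid $pid holds CPU core lock(s)"
else
  echo "pid $pid holds no CPU core locks"
fi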
00:11:37.720 14:37:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:37.720 14:37:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:37.982 [2024-11-04 14:37:46.860166] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:11:37.982 [2024-11-04 14:37:46.860235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58042 ] 00:11:37.982 [2024-11-04 14:37:47.001500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.982 [2024-11-04 14:37:47.038700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.982 [2024-11-04 14:37:47.087116] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:38.919 14:37:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:38.919 14:37:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:11:38.919 14:37:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58058 00:11:38.919 14:37:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58058 /var/tmp/spdk2.sock 00:11:38.919 14:37:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58058 ']' 00:11:38.919 14:37:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:38.919 14:37:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:38.919 14:37:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:38.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:38.919 14:37:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:38.919 14:37:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:38.919 14:37:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:38.919 [2024-11-04 14:37:47.768503] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:11:38.919 [2024-11-04 14:37:47.768572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58058 ] 00:11:38.919 [2024-11-04 14:37:47.921858] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:38.919 [2024-11-04 14:37:47.921907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.919 [2024-11-04 14:37:47.995001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.177 [2024-11-04 14:37:48.092122] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:39.743 14:37:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:39.743 14:37:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:11:39.743 14:37:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58042 00:11:39.743 14:37:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58042 00:11:39.743 14:37:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:40.000 14:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58042 00:11:40.000 14:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58042 ']' 00:11:40.000 14:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58042 00:11:40.000 14:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:11:40.001 14:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:40.001 14:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58042 00:11:40.258 14:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:40.258 14:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:40.258 14:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58042' 00:11:40.258 killing process with pid 58042 00:11:40.258 14:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58042 00:11:40.259 14:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58042 00:11:40.553 14:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58058 00:11:40.553 14:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58058 ']' 00:11:40.553 14:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58058 00:11:40.553 14:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:11:40.553 14:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:40.553 14:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58058 00:11:40.553 14:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:40.553 14:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:40.553 killing process with pid 58058 00:11:40.553 14:37:49 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58058' 00:11:40.553 14:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58058 00:11:40.553 14:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58058 00:11:40.813 00:11:40.813 real 0m2.930s 00:11:40.813 user 0m3.392s 00:11:40.813 sys 0m0.665s 00:11:40.813 14:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:40.813 14:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:40.813 ************************************ 00:11:40.813 END TEST non_locking_app_on_locked_coremask 00:11:40.813 ************************************ 00:11:40.813 14:37:49 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:40.813 14:37:49 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:40.813 14:37:49 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:40.813 14:37:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:40.813 ************************************ 00:11:40.813 START TEST locking_app_on_unlocked_coremask 00:11:40.813 ************************************ 00:11:40.813 14:37:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:11:40.813 14:37:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58114 00:11:40.813 14:37:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58114 /var/tmp/spdk.sock 00:11:40.813 14:37:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58114 ']' 00:11:40.813 14:37:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:40.813 14:37:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.813 14:37:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:40.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.813 14:37:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.813 14:37:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:40.813 14:37:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:40.813 [2024-11-04 14:37:49.824844] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:11:40.813 [2024-11-04 14:37:49.824903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58114 ] 00:11:41.071 [2024-11-04 14:37:49.958888] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
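Every sub-test in this block exercises the same pairing: one spdk_tgt instance takes the file lock for its core mask, while the other is started with --disable-cpumask-locks and its own RPC socket so both can sit on core 0 (which side opts out varies by sub-test). A minimal sketch of that pairing, with an assumed binary path:

# Sketch only - the two-instance pattern these cpu_locks tests exercise.
SPDK_TGT=./build/bin/spdk_tgt         # assumed path; the runs above use the repo build directory
"$SPDK_TGT" -m 0x1 &                  # first instance claims the core 0 lock
# a second instance on the same mask must skip core locking and use a separate RPC socket,
# otherwise spdk_app_start aborts with "Cannot create lock on core 0, probably process ... has claimed it"
"$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &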
00:11:41.071 [2024-11-04 14:37:49.958932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.071 [2024-11-04 14:37:49.990602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.071 [2024-11-04 14:37:50.035323] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:41.636 14:37:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:41.636 14:37:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:11:41.637 14:37:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58130 00:11:41.637 14:37:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:41.637 14:37:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58130 /var/tmp/spdk2.sock 00:11:41.637 14:37:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58130 ']' 00:11:41.637 14:37:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:41.637 14:37:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:41.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:41.637 14:37:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:41.637 14:37:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:41.637 14:37:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:41.637 [2024-11-04 14:37:50.742966] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:11:41.637 [2024-11-04 14:37:50.743034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58130 ] 00:11:41.899 [2024-11-04 14:37:50.889446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.899 [2024-11-04 14:37:50.952771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.157 [2024-11-04 14:37:51.040829] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:42.416 14:37:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:42.416 14:37:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:11:42.416 14:37:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58130 00:11:42.416 14:37:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:42.416 14:37:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58130 00:11:42.981 14:37:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58114 00:11:42.981 14:37:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58114 ']' 00:11:42.981 14:37:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58114 00:11:42.981 14:37:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:11:42.981 14:37:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:42.981 14:37:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58114 00:11:42.981 14:37:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:42.981 14:37:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:42.981 killing process with pid 58114 00:11:42.981 14:37:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58114' 00:11:42.981 14:37:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58114 00:11:42.981 14:37:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 58114 00:11:43.238 14:37:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58130 00:11:43.238 14:37:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58130 ']' 00:11:43.238 14:37:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58130 00:11:43.238 14:37:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:11:43.238 14:37:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:43.238 14:37:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58130 00:11:43.238 14:37:52 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:43.238 14:37:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:43.238 killing process with pid 58130 00:11:43.238 14:37:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58130' 00:11:43.238 14:37:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58130 00:11:43.238 14:37:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 58130 00:11:43.496 00:11:43.496 real 0m2.685s 00:11:43.496 user 0m3.044s 00:11:43.496 sys 0m0.630s 00:11:43.496 14:37:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:43.496 ************************************ 00:11:43.496 END TEST locking_app_on_unlocked_coremask 00:11:43.496 14:37:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:43.496 ************************************ 00:11:43.496 14:37:52 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:43.496 14:37:52 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:43.496 14:37:52 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:43.496 14:37:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:43.496 ************************************ 00:11:43.496 START TEST locking_app_on_locked_coremask 00:11:43.496 ************************************ 00:11:43.496 14:37:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:11:43.496 14:37:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58186 00:11:43.496 14:37:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58186 /var/tmp/spdk.sock 00:11:43.496 14:37:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58186 ']' 00:11:43.496 14:37:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.496 14:37:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:43.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.496 14:37:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.496 14:37:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:43.497 14:37:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:43.497 14:37:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:43.497 [2024-11-04 14:37:52.569481] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:11:43.497 [2024-11-04 14:37:52.569545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58186 ] 00:11:43.756 [2024-11-04 14:37:52.706837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.756 [2024-11-04 14:37:52.742247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.756 [2024-11-04 14:37:52.786634] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:44.328 14:37:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:44.328 14:37:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:11:44.328 14:37:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58202 00:11:44.328 14:37:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:44.328 14:37:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58202 /var/tmp/spdk2.sock 00:11:44.328 14:37:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:11:44.328 14:37:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58202 /var/tmp/spdk2.sock 00:11:44.328 14:37:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:11:44.328 14:37:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:44.328 14:37:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:11:44.328 14:37:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:44.328 14:37:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58202 /var/tmp/spdk2.sock 00:11:44.328 14:37:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58202 ']' 00:11:44.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:44.328 14:37:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:44.328 14:37:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:44.328 14:37:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:44.328 14:37:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:44.328 14:37:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:44.588 [2024-11-04 14:37:53.490987] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:11:44.588 [2024-11-04 14:37:53.491054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58202 ] 00:11:44.588 [2024-11-04 14:37:53.643809] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58186 has claimed it. 00:11:44.588 [2024-11-04 14:37:53.643864] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:45.162 ERROR: process (pid: 58202) is no longer running 00:11:45.162 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58202) - No such process 00:11:45.162 14:37:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:45.162 14:37:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:11:45.162 14:37:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:11:45.162 14:37:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:45.162 14:37:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:45.162 14:37:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:45.162 14:37:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58186 00:11:45.162 14:37:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:45.162 14:37:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58186 00:11:45.421 14:37:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58186 00:11:45.421 14:37:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58186 ']' 00:11:45.421 14:37:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58186 00:11:45.421 14:37:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:11:45.421 14:37:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:45.421 14:37:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58186 00:11:45.421 killing process with pid 58186 00:11:45.421 14:37:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:45.421 14:37:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:45.421 14:37:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58186' 00:11:45.421 14:37:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58186 00:11:45.421 14:37:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58186 00:11:45.678 00:11:45.678 real 0m2.061s 00:11:45.678 user 0m2.406s 00:11:45.678 sys 0m0.382s 00:11:45.678 14:37:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:45.678 ************************************ 00:11:45.678 END 
TEST locking_app_on_locked_coremask 00:11:45.678 ************************************ 00:11:45.678 14:37:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:45.678 14:37:54 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:45.678 14:37:54 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:45.678 14:37:54 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:45.678 14:37:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:45.678 ************************************ 00:11:45.678 START TEST locking_overlapped_coremask 00:11:45.678 ************************************ 00:11:45.678 14:37:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:11:45.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.678 14:37:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58242 00:11:45.678 14:37:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58242 /var/tmp/spdk.sock 00:11:45.678 14:37:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:11:45.678 14:37:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 58242 ']' 00:11:45.678 14:37:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.678 14:37:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:45.678 14:37:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.678 14:37:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:45.678 14:37:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:45.678 [2024-11-04 14:37:54.676008] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:11:45.678 [2024-11-04 14:37:54.676182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58242 ] 00:11:45.678 [2024-11-04 14:37:54.813312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:45.936 [2024-11-04 14:37:54.862552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.936 [2024-11-04 14:37:54.862451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:45.936 [2024-11-04 14:37:54.862549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:45.936 [2024-11-04 14:37:54.908977] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:46.514 14:37:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:46.514 14:37:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:11:46.514 14:37:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58260 00:11:46.514 14:37:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58260 /var/tmp/spdk2.sock 00:11:46.514 14:37:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:11:46.514 14:37:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58260 /var/tmp/spdk2.sock 00:11:46.514 14:37:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:11:46.514 14:37:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:46.514 14:37:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:46.514 14:37:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:11:46.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:46.514 14:37:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:46.514 14:37:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58260 /var/tmp/spdk2.sock 00:11:46.514 14:37:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 58260 ']' 00:11:46.514 14:37:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:46.514 14:37:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:46.514 14:37:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:46.514 14:37:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:46.514 14:37:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:46.514 [2024-11-04 14:37:55.636863] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
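The failure provoked next follows directly from the two core masks used in these launches; a quick illustrative check of the overlap (mask values taken from the trace, result matching the core-2 error below):

# 0x7  = 0b00111 -> cores 0,1,2 (first instance)
# 0x1c = 0b11100 -> cores 2,3,4 (second instance)
printf 'overlapping core mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2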
00:11:46.514 [2024-11-04 14:37:55.637034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58260 ] 00:11:46.774 [2024-11-04 14:37:55.788816] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58242 has claimed it. 00:11:46.774 [2024-11-04 14:37:55.788874] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:47.341 ERROR: process (pid: 58260) is no longer running 00:11:47.341 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58260) - No such process 00:11:47.341 14:37:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:47.341 14:37:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:11:47.341 14:37:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:11:47.341 14:37:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:47.341 14:37:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:47.341 14:37:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:47.341 14:37:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:47.341 14:37:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:47.341 14:37:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:47.341 14:37:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:47.341 14:37:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58242 00:11:47.341 14:37:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 58242 ']' 00:11:47.342 14:37:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 58242 00:11:47.342 14:37:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:11:47.342 14:37:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:47.342 14:37:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58242 00:11:47.342 killing process with pid 58242 00:11:47.342 14:37:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:47.342 14:37:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:47.342 14:37:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58242' 00:11:47.342 14:37:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 58242 00:11:47.342 14:37:56 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 58242 00:11:47.603 ************************************ 00:11:47.603 END TEST locking_overlapped_coremask 00:11:47.603 ************************************ 00:11:47.603 00:11:47.603 real 0m1.862s 00:11:47.603 user 0m5.331s 00:11:47.603 sys 0m0.270s 00:11:47.603 14:37:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:47.603 14:37:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:47.603 14:37:56 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:47.603 14:37:56 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:47.603 14:37:56 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:47.603 14:37:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:47.603 ************************************ 00:11:47.603 START TEST locking_overlapped_coremask_via_rpc 00:11:47.603 ************************************ 00:11:47.603 14:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:11:47.603 14:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58300 00:11:47.603 14:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58300 /var/tmp/spdk.sock 00:11:47.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.603 14:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58300 ']' 00:11:47.603 14:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.603 14:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:47.603 14:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.603 14:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:47.603 14:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:47.603 14:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.603 [2024-11-04 14:37:56.577334] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:11:47.603 [2024-11-04 14:37:56.577545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58300 ] 00:11:47.603 [2024-11-04 14:37:56.718138] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:47.603 [2024-11-04 14:37:56.718195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:47.864 [2024-11-04 14:37:56.756561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.864 [2024-11-04 14:37:56.756798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.864 [2024-11-04 14:37:56.756785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.864 [2024-11-04 14:37:56.804709] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:48.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:48.429 14:37:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:48.429 14:37:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:48.429 14:37:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58307 00:11:48.429 14:37:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 58307 /var/tmp/spdk2.sock 00:11:48.429 14:37:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58307 ']' 00:11:48.429 14:37:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:48.429 14:37:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:48.429 14:37:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:48.429 14:37:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:48.429 14:37:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:48.429 14:37:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.429 [2024-11-04 14:37:57.402945] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:11:48.429 [2024-11-04 14:37:57.403123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58307 ] 00:11:48.429 [2024-11-04 14:37:57.557010] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:48.429 [2024-11-04 14:37:57.557057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:48.686 [2024-11-04 14:37:57.630816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:48.686 [2024-11-04 14:37:57.634672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:48.686 [2024-11-04 14:37:57.634675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:48.686 [2024-11-04 14:37:57.721638] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:49.251 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:49.251 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:49.251 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:49.251 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.251 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.251 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.251 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:49.251 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:11:49.251 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:49.251 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:49.251 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:49.251 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:49.251 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:49.251 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:49.251 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.251 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.251 [2024-11-04 14:37:58.304716] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58300 has claimed it. 00:11:49.251 request: 00:11:49.251 { 00:11:49.251 "method": "framework_enable_cpumask_locks", 00:11:49.251 "req_id": 1 00:11:49.251 } 00:11:49.251 Got JSON-RPC error response 00:11:49.251 response: 00:11:49.251 { 00:11:49.251 "code": -32603, 00:11:49.251 "message": "Failed to claim CPU core: 2" 00:11:49.251 } 00:11:49.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
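Both targets in this via_rpc variant are launched with --disable-cpumask-locks, so neither claims its per-core lock files at startup; the locks are only taken when framework_enable_cpumask_locks is called over RPC, which is why the second target's request above fails with -32603 ("Failed to claim CPU core: 2") once the first target holds core 2. Assuming the default lock-file location used by this test, the same sequence looks roughly like this when driven by hand with scripts/rpc.py:

    # first target (mask 0x7, cores 0-2) takes its locks at runtime
    scripts/rpc.py framework_enable_cpumask_locks
    ls /var/tmp/spdk_cpu_lock_*                                             # expect spdk_cpu_lock_000 .. spdk_cpu_lock_002
    # second target (mask 0x1c, RPC socket /var/tmp/spdk2.sock) now collides on core 2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks    # fails with -32603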
00:11:49.251 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:49.251 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:11:49.251 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:49.251 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:49.251 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:49.251 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58300 /var/tmp/spdk.sock 00:11:49.251 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58300 ']' 00:11:49.251 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.251 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:49.251 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.251 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:49.251 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:49.510 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:49.510 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:49.510 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 58307 /var/tmp/spdk2.sock 00:11:49.510 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58307 ']' 00:11:49.510 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:49.510 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:49.510 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:11:49.510 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:49.510 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.768 ************************************ 00:11:49.768 END TEST locking_overlapped_coremask_via_rpc 00:11:49.768 ************************************ 00:11:49.768 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:49.768 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:49.768 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:49.769 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:49.769 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:49.769 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:49.769 00:11:49.769 real 0m2.165s 00:11:49.769 user 0m0.960s 00:11:49.769 sys 0m0.133s 00:11:49.769 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:49.769 14:37:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.769 14:37:58 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:11:49.769 14:37:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58300 ]] 00:11:49.769 14:37:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58300 00:11:49.769 14:37:58 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58300 ']' 00:11:49.769 14:37:58 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58300 00:11:49.769 14:37:58 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:11:49.769 14:37:58 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:49.769 14:37:58 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58300 00:11:49.769 killing process with pid 58300 00:11:49.769 14:37:58 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:49.769 14:37:58 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:49.769 14:37:58 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58300' 00:11:49.769 14:37:58 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 58300 00:11:49.769 14:37:58 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 58300 00:11:50.027 14:37:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58307 ]] 00:11:50.027 14:37:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58307 00:11:50.027 14:37:58 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58307 ']' 00:11:50.027 14:37:58 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58307 00:11:50.027 14:37:58 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:11:50.027 14:37:58 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:50.027 
14:37:58 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58307 00:11:50.027 killing process with pid 58307 00:11:50.027 14:37:58 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:11:50.027 14:37:58 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:11:50.027 14:37:58 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58307' 00:11:50.027 14:37:58 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 58307 00:11:50.027 14:37:58 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 58307 00:11:50.285 14:37:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:50.285 14:37:59 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:11:50.285 14:37:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58300 ]] 00:11:50.285 14:37:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58300 00:11:50.285 14:37:59 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58300 ']' 00:11:50.285 14:37:59 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58300 00:11:50.285 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (58300) - No such process 00:11:50.285 Process with pid 58300 is not found 00:11:50.285 14:37:59 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 58300 is not found' 00:11:50.285 Process with pid 58307 is not found 00:11:50.285 14:37:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58307 ]] 00:11:50.285 14:37:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58307 00:11:50.285 14:37:59 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58307 ']' 00:11:50.285 14:37:59 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58307 00:11:50.285 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (58307) - No such process 00:11:50.285 14:37:59 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 58307 is not found' 00:11:50.285 14:37:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:50.285 ************************************ 00:11:50.285 END TEST cpu_locks 00:11:50.285 ************************************ 00:11:50.285 00:11:50.285 real 0m15.360s 00:11:50.285 user 0m27.823s 00:11:50.285 sys 0m3.392s 00:11:50.285 14:37:59 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:50.285 14:37:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:50.285 ************************************ 00:11:50.285 END TEST event 00:11:50.285 ************************************ 00:11:50.285 00:11:50.285 real 0m37.529s 00:11:50.285 user 1m13.298s 00:11:50.285 sys 0m5.955s 00:11:50.285 14:37:59 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:50.285 14:37:59 event -- common/autotest_common.sh@10 -- # set +x 00:11:50.285 14:37:59 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:50.285 14:37:59 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:50.285 14:37:59 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:50.285 14:37:59 -- common/autotest_common.sh@10 -- # set +x 00:11:50.285 ************************************ 00:11:50.285 START TEST thread 00:11:50.285 ************************************ 00:11:50.285 14:37:59 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:50.285 * Looking for test storage... 
00:11:50.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:11:50.285 14:37:59 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:50.285 14:37:59 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:50.285 14:37:59 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:11:50.285 14:37:59 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:50.285 14:37:59 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:50.285 14:37:59 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:50.285 14:37:59 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:50.285 14:37:59 thread -- scripts/common.sh@336 -- # IFS=.-: 00:11:50.285 14:37:59 thread -- scripts/common.sh@336 -- # read -ra ver1 00:11:50.285 14:37:59 thread -- scripts/common.sh@337 -- # IFS=.-: 00:11:50.285 14:37:59 thread -- scripts/common.sh@337 -- # read -ra ver2 00:11:50.285 14:37:59 thread -- scripts/common.sh@338 -- # local 'op=<' 00:11:50.285 14:37:59 thread -- scripts/common.sh@340 -- # ver1_l=2 00:11:50.285 14:37:59 thread -- scripts/common.sh@341 -- # ver2_l=1 00:11:50.285 14:37:59 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:50.285 14:37:59 thread -- scripts/common.sh@344 -- # case "$op" in 00:11:50.285 14:37:59 thread -- scripts/common.sh@345 -- # : 1 00:11:50.285 14:37:59 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:50.285 14:37:59 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:50.285 14:37:59 thread -- scripts/common.sh@365 -- # decimal 1 00:11:50.285 14:37:59 thread -- scripts/common.sh@353 -- # local d=1 00:11:50.285 14:37:59 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:50.285 14:37:59 thread -- scripts/common.sh@355 -- # echo 1 00:11:50.285 14:37:59 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:11:50.285 14:37:59 thread -- scripts/common.sh@366 -- # decimal 2 00:11:50.285 14:37:59 thread -- scripts/common.sh@353 -- # local d=2 00:11:50.285 14:37:59 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:50.285 14:37:59 thread -- scripts/common.sh@355 -- # echo 2 00:11:50.285 14:37:59 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:11:50.285 14:37:59 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:50.285 14:37:59 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:50.285 14:37:59 thread -- scripts/common.sh@368 -- # return 0 00:11:50.285 14:37:59 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:50.285 14:37:59 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:50.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.285 --rc genhtml_branch_coverage=1 00:11:50.285 --rc genhtml_function_coverage=1 00:11:50.285 --rc genhtml_legend=1 00:11:50.285 --rc geninfo_all_blocks=1 00:11:50.285 --rc geninfo_unexecuted_blocks=1 00:11:50.285 00:11:50.285 ' 00:11:50.285 14:37:59 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:50.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.285 --rc genhtml_branch_coverage=1 00:11:50.285 --rc genhtml_function_coverage=1 00:11:50.285 --rc genhtml_legend=1 00:11:50.285 --rc geninfo_all_blocks=1 00:11:50.285 --rc geninfo_unexecuted_blocks=1 00:11:50.285 00:11:50.285 ' 00:11:50.285 14:37:59 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:50.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:11:50.285 --rc genhtml_branch_coverage=1 00:11:50.285 --rc genhtml_function_coverage=1 00:11:50.285 --rc genhtml_legend=1 00:11:50.285 --rc geninfo_all_blocks=1 00:11:50.285 --rc geninfo_unexecuted_blocks=1 00:11:50.285 00:11:50.285 ' 00:11:50.285 14:37:59 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:50.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.285 --rc genhtml_branch_coverage=1 00:11:50.285 --rc genhtml_function_coverage=1 00:11:50.285 --rc genhtml_legend=1 00:11:50.285 --rc geninfo_all_blocks=1 00:11:50.285 --rc geninfo_unexecuted_blocks=1 00:11:50.285 00:11:50.285 ' 00:11:50.285 14:37:59 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:50.286 14:37:59 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:11:50.286 14:37:59 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:50.286 14:37:59 thread -- common/autotest_common.sh@10 -- # set +x 00:11:50.286 ************************************ 00:11:50.286 START TEST thread_poller_perf 00:11:50.286 ************************************ 00:11:50.286 14:37:59 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:50.286 [2024-11-04 14:37:59.402177] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:11:50.286 [2024-11-04 14:37:59.402337] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58432 ] 00:11:50.543 [2024-11-04 14:37:59.537520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.544 [2024-11-04 14:37:59.568176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.544 Running 1000 pollers for 1 seconds with 1 microseconds period. 
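The poller_perf flags map directly onto the banner just printed: -b 1000 registers 1000 pollers, -l 1 gives each poller a 1 microsecond period (a timed poller), and -t 1 runs the measurement for 1 second. The second invocation below uses -l 0, i.e. pollers with no period that are run on every reactor iteration. In short:

    poller_perf -b 1000 -l 1 -t 1   # 1000 timed pollers, 1 us period, 1 s run
    poller_perf -b 1000 -l 0 -t 1   # 1000 zero-period pollers, 1 s run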
00:11:51.475 [2024-11-04T14:38:00.615Z] ====================================== 00:11:51.475 [2024-11-04T14:38:00.615Z] busy:2606106902 (cyc) 00:11:51.475 [2024-11-04T14:38:00.615Z] total_run_count: 395000 00:11:51.475 [2024-11-04T14:38:00.615Z] tsc_hz: 2600000000 (cyc) 00:11:51.475 [2024-11-04T14:38:00.615Z] ====================================== 00:11:51.475 [2024-11-04T14:38:00.615Z] poller_cost: 6597 (cyc), 2537 (nsec) 00:11:51.475 00:11:51.475 real 0m1.214s 00:11:51.475 user 0m1.080s 00:11:51.475 sys 0m0.028s 00:11:51.475 14:38:00 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:51.475 ************************************ 00:11:51.475 14:38:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:51.475 END TEST thread_poller_perf 00:11:51.475 ************************************ 00:11:51.733 14:38:00 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:51.733 14:38:00 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:11:51.733 14:38:00 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:51.733 14:38:00 thread -- common/autotest_common.sh@10 -- # set +x 00:11:51.733 ************************************ 00:11:51.733 START TEST thread_poller_perf 00:11:51.733 ************************************ 00:11:51.733 14:38:00 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:51.733 [2024-11-04 14:38:00.654086] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:11:51.733 [2024-11-04 14:38:00.654172] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58473 ] 00:11:51.733 [2024-11-04 14:38:00.783974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.733 [2024-11-04 14:38:00.814345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.733 Running 1000 pollers for 1 seconds with 0 microseconds period. 
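In the result block above, poller_cost is the busy TSC count divided by total_run_count, converted to nanoseconds via tsc_hz: 2606106902 cyc / 395000 runs is about 6597 cyc per poll, and 6597 cyc / 2.6 cyc-per-ns is about 2537 nsec. The same relation holds for the zero-period run whose results follow. A quick check of the arithmetic:

    python3 -c 'print(2606106902/395000, 2606106902/395000/2.6)'   # ~6597.7 cyc, ~2537.6 ns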
00:11:53.105 [2024-11-04T14:38:02.245Z] ====================================== 00:11:53.105 [2024-11-04T14:38:02.245Z] busy:2601955122 (cyc) 00:11:53.105 [2024-11-04T14:38:02.245Z] total_run_count: 5362000 00:11:53.105 [2024-11-04T14:38:02.245Z] tsc_hz: 2600000000 (cyc) 00:11:53.105 [2024-11-04T14:38:02.245Z] ====================================== 00:11:53.105 [2024-11-04T14:38:02.245Z] poller_cost: 485 (cyc), 186 (nsec) 00:11:53.105 00:11:53.105 real 0m1.203s 00:11:53.105 user 0m1.074s 00:11:53.105 sys 0m0.024s 00:11:53.105 14:38:01 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:53.105 14:38:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:53.105 ************************************ 00:11:53.105 END TEST thread_poller_perf 00:11:53.105 ************************************ 00:11:53.105 14:38:01 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:11:53.105 00:11:53.105 real 0m2.636s 00:11:53.105 user 0m2.267s 00:11:53.105 sys 0m0.163s 00:11:53.105 ************************************ 00:11:53.105 END TEST thread 00:11:53.105 ************************************ 00:11:53.105 14:38:01 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:53.105 14:38:01 thread -- common/autotest_common.sh@10 -- # set +x 00:11:53.105 14:38:01 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:11:53.105 14:38:01 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:53.105 14:38:01 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:53.105 14:38:01 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:53.105 14:38:01 -- common/autotest_common.sh@10 -- # set +x 00:11:53.105 ************************************ 00:11:53.105 START TEST app_cmdline 00:11:53.105 ************************************ 00:11:53.105 14:38:01 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:53.105 * Looking for test storage... 
00:11:53.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:53.105 14:38:01 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:53.105 14:38:01 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:11:53.105 14:38:01 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:53.105 14:38:02 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:53.105 14:38:02 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:53.105 14:38:02 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:53.105 14:38:02 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:53.105 14:38:02 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:11:53.105 14:38:02 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:11:53.105 14:38:02 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:11:53.105 14:38:02 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:11:53.105 14:38:02 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:11:53.105 14:38:02 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:11:53.105 14:38:02 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:11:53.105 14:38:02 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:53.105 14:38:02 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:11:53.105 14:38:02 app_cmdline -- scripts/common.sh@345 -- # : 1 00:11:53.105 14:38:02 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:53.105 14:38:02 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:53.105 14:38:02 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:11:53.105 14:38:02 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:11:53.105 14:38:02 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:53.105 14:38:02 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:11:53.105 14:38:02 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:11:53.105 14:38:02 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:11:53.105 14:38:02 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:11:53.105 14:38:02 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:53.105 14:38:02 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:11:53.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:53.105 14:38:02 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:11:53.105 14:38:02 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:53.105 14:38:02 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:53.105 14:38:02 app_cmdline -- scripts/common.sh@368 -- # return 0 00:11:53.105 14:38:02 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:53.105 14:38:02 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:53.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.105 --rc genhtml_branch_coverage=1 00:11:53.105 --rc genhtml_function_coverage=1 00:11:53.105 --rc genhtml_legend=1 00:11:53.105 --rc geninfo_all_blocks=1 00:11:53.105 --rc geninfo_unexecuted_blocks=1 00:11:53.105 00:11:53.105 ' 00:11:53.105 14:38:02 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:53.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.105 --rc genhtml_branch_coverage=1 00:11:53.105 --rc genhtml_function_coverage=1 00:11:53.105 --rc genhtml_legend=1 00:11:53.105 --rc geninfo_all_blocks=1 00:11:53.105 --rc geninfo_unexecuted_blocks=1 00:11:53.105 00:11:53.105 ' 00:11:53.105 14:38:02 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:53.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.105 --rc genhtml_branch_coverage=1 00:11:53.105 --rc genhtml_function_coverage=1 00:11:53.105 --rc genhtml_legend=1 00:11:53.105 --rc geninfo_all_blocks=1 00:11:53.105 --rc geninfo_unexecuted_blocks=1 00:11:53.105 00:11:53.105 ' 00:11:53.105 14:38:02 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:53.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.105 --rc genhtml_branch_coverage=1 00:11:53.105 --rc genhtml_function_coverage=1 00:11:53.105 --rc genhtml_legend=1 00:11:53.105 --rc geninfo_all_blocks=1 00:11:53.105 --rc geninfo_unexecuted_blocks=1 00:11:53.105 00:11:53.105 ' 00:11:53.105 14:38:02 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:53.105 14:38:02 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=58550 00:11:53.105 14:38:02 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 58550 00:11:53.105 14:38:02 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:53.105 14:38:02 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 58550 ']' 00:11:53.105 14:38:02 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.105 14:38:02 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:53.105 14:38:02 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.105 14:38:02 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:53.105 14:38:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:53.105 [2024-11-04 14:38:02.090381] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:11:53.105 [2024-11-04 14:38:02.090570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58550 ] 00:11:53.105 [2024-11-04 14:38:02.225664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.363 [2024-11-04 14:38:02.257295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.363 [2024-11-04 14:38:02.299125] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:53.964 14:38:02 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:53.964 14:38:02 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:11:53.964 14:38:02 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:11:54.221 { 00:11:54.221 "version": "SPDK v25.01-pre git sha1 6e713f9c6", 00:11:54.221 "fields": { 00:11:54.221 "major": 25, 00:11:54.221 "minor": 1, 00:11:54.221 "patch": 0, 00:11:54.221 "suffix": "-pre", 00:11:54.221 "commit": "6e713f9c6" 00:11:54.221 } 00:11:54.221 } 00:11:54.221 14:38:03 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:11:54.221 14:38:03 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:54.221 14:38:03 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:54.221 14:38:03 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:54.221 14:38:03 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:54.221 14:38:03 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:54.221 14:38:03 app_cmdline -- app/cmdline.sh@26 -- # sort 00:11:54.222 14:38:03 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.222 14:38:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:54.222 14:38:03 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.222 14:38:03 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:54.222 14:38:03 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:54.222 14:38:03 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:54.222 14:38:03 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:11:54.222 14:38:03 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:54.222 14:38:03 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:54.222 14:38:03 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:54.222 14:38:03 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:54.222 14:38:03 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:54.222 14:38:03 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:54.222 14:38:03 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:54.222 14:38:03 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:54.222 14:38:03 app_cmdline -- common/autotest_common.sh@644 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:54.222 14:38:03 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:54.480 request: 00:11:54.480 { 00:11:54.480 "method": "env_dpdk_get_mem_stats", 00:11:54.480 "req_id": 1 00:11:54.480 } 00:11:54.480 Got JSON-RPC error response 00:11:54.480 response: 00:11:54.480 { 00:11:54.480 "code": -32601, 00:11:54.480 "message": "Method not found" 00:11:54.480 } 00:11:54.480 14:38:03 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:11:54.480 14:38:03 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:54.480 14:38:03 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:54.480 14:38:03 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:54.480 14:38:03 app_cmdline -- app/cmdline.sh@1 -- # killprocess 58550 00:11:54.480 14:38:03 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 58550 ']' 00:11:54.480 14:38:03 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 58550 00:11:54.480 14:38:03 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:11:54.480 14:38:03 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:54.480 14:38:03 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58550 00:11:54.480 killing process with pid 58550 00:11:54.480 14:38:03 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:54.480 14:38:03 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:54.480 14:38:03 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58550' 00:11:54.480 14:38:03 app_cmdline -- common/autotest_common.sh@971 -- # kill 58550 00:11:54.480 14:38:03 app_cmdline -- common/autotest_common.sh@976 -- # wait 58550 00:11:54.738 00:11:54.738 real 0m1.716s 00:11:54.738 user 0m2.142s 00:11:54.738 sys 0m0.330s 00:11:54.738 ************************************ 00:11:54.738 END TEST app_cmdline 00:11:54.738 ************************************ 00:11:54.738 14:38:03 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:54.738 14:38:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:54.738 14:38:03 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:54.738 14:38:03 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:54.738 14:38:03 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:54.738 14:38:03 -- common/autotest_common.sh@10 -- # set +x 00:11:54.738 ************************************ 00:11:54.738 START TEST version 00:11:54.738 ************************************ 00:11:54.738 14:38:03 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:54.738 * Looking for test storage... 
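The cmdline test above depends on the --rpcs-allowed filter: the target is started so that only spdk_get_version and rpc_get_methods may be called, which is why env_dpdk_get_mem_stats is rejected with JSON-RPC error -32601 ("Method not found") even though a normally configured target implements it. Driven by hand against the same target, the two cases look roughly like:

    scripts/rpc.py spdk_get_version           # allowed: returns the version object shown above
    scripts/rpc.py env_dpdk_get_mem_stats     # filtered out: fails with -32601 'Method not found'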
00:11:54.738 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:54.738 14:38:03 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:54.738 14:38:03 version -- common/autotest_common.sh@1691 -- # lcov --version 00:11:54.738 14:38:03 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:54.738 14:38:03 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:54.738 14:38:03 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:54.738 14:38:03 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:54.738 14:38:03 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:54.738 14:38:03 version -- scripts/common.sh@336 -- # IFS=.-: 00:11:54.738 14:38:03 version -- scripts/common.sh@336 -- # read -ra ver1 00:11:54.738 14:38:03 version -- scripts/common.sh@337 -- # IFS=.-: 00:11:54.738 14:38:03 version -- scripts/common.sh@337 -- # read -ra ver2 00:11:54.738 14:38:03 version -- scripts/common.sh@338 -- # local 'op=<' 00:11:54.738 14:38:03 version -- scripts/common.sh@340 -- # ver1_l=2 00:11:54.738 14:38:03 version -- scripts/common.sh@341 -- # ver2_l=1 00:11:54.738 14:38:03 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:54.738 14:38:03 version -- scripts/common.sh@344 -- # case "$op" in 00:11:54.738 14:38:03 version -- scripts/common.sh@345 -- # : 1 00:11:54.738 14:38:03 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:54.738 14:38:03 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:54.738 14:38:03 version -- scripts/common.sh@365 -- # decimal 1 00:11:54.738 14:38:03 version -- scripts/common.sh@353 -- # local d=1 00:11:54.738 14:38:03 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:54.738 14:38:03 version -- scripts/common.sh@355 -- # echo 1 00:11:54.738 14:38:03 version -- scripts/common.sh@365 -- # ver1[v]=1 00:11:54.738 14:38:03 version -- scripts/common.sh@366 -- # decimal 2 00:11:54.738 14:38:03 version -- scripts/common.sh@353 -- # local d=2 00:11:54.738 14:38:03 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:54.738 14:38:03 version -- scripts/common.sh@355 -- # echo 2 00:11:54.738 14:38:03 version -- scripts/common.sh@366 -- # ver2[v]=2 00:11:54.738 14:38:03 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:54.738 14:38:03 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:54.738 14:38:03 version -- scripts/common.sh@368 -- # return 0 00:11:54.738 14:38:03 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:54.738 14:38:03 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:54.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.738 --rc genhtml_branch_coverage=1 00:11:54.738 --rc genhtml_function_coverage=1 00:11:54.738 --rc genhtml_legend=1 00:11:54.738 --rc geninfo_all_blocks=1 00:11:54.738 --rc geninfo_unexecuted_blocks=1 00:11:54.738 00:11:54.738 ' 00:11:54.738 14:38:03 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:54.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.738 --rc genhtml_branch_coverage=1 00:11:54.738 --rc genhtml_function_coverage=1 00:11:54.738 --rc genhtml_legend=1 00:11:54.738 --rc geninfo_all_blocks=1 00:11:54.738 --rc geninfo_unexecuted_blocks=1 00:11:54.738 00:11:54.738 ' 00:11:54.738 14:38:03 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:54.738 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:54.738 --rc genhtml_branch_coverage=1 00:11:54.738 --rc genhtml_function_coverage=1 00:11:54.738 --rc genhtml_legend=1 00:11:54.738 --rc geninfo_all_blocks=1 00:11:54.738 --rc geninfo_unexecuted_blocks=1 00:11:54.738 00:11:54.738 ' 00:11:54.738 14:38:03 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:54.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.738 --rc genhtml_branch_coverage=1 00:11:54.738 --rc genhtml_function_coverage=1 00:11:54.738 --rc genhtml_legend=1 00:11:54.738 --rc geninfo_all_blocks=1 00:11:54.738 --rc geninfo_unexecuted_blocks=1 00:11:54.738 00:11:54.738 ' 00:11:54.738 14:38:03 version -- app/version.sh@17 -- # get_header_version major 00:11:54.738 14:38:03 version -- app/version.sh@14 -- # cut -f2 00:11:54.738 14:38:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:54.738 14:38:03 version -- app/version.sh@14 -- # tr -d '"' 00:11:54.738 14:38:03 version -- app/version.sh@17 -- # major=25 00:11:54.738 14:38:03 version -- app/version.sh@18 -- # get_header_version minor 00:11:54.738 14:38:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:54.738 14:38:03 version -- app/version.sh@14 -- # cut -f2 00:11:54.738 14:38:03 version -- app/version.sh@14 -- # tr -d '"' 00:11:54.738 14:38:03 version -- app/version.sh@18 -- # minor=1 00:11:54.738 14:38:03 version -- app/version.sh@19 -- # get_header_version patch 00:11:54.738 14:38:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:54.738 14:38:03 version -- app/version.sh@14 -- # tr -d '"' 00:11:54.738 14:38:03 version -- app/version.sh@14 -- # cut -f2 00:11:54.738 14:38:03 version -- app/version.sh@19 -- # patch=0 00:11:54.738 14:38:03 version -- app/version.sh@20 -- # get_header_version suffix 00:11:54.738 14:38:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:54.738 14:38:03 version -- app/version.sh@14 -- # cut -f2 00:11:54.738 14:38:03 version -- app/version.sh@14 -- # tr -d '"' 00:11:54.738 14:38:03 version -- app/version.sh@20 -- # suffix=-pre 00:11:54.738 14:38:03 version -- app/version.sh@22 -- # version=25.1 00:11:54.738 14:38:03 version -- app/version.sh@25 -- # (( patch != 0 )) 00:11:54.738 14:38:03 version -- app/version.sh@28 -- # version=25.1rc0 00:11:54.738 14:38:03 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:54.738 14:38:03 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:54.738 14:38:03 version -- app/version.sh@30 -- # py_version=25.1rc0 00:11:54.738 14:38:03 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:11:54.738 00:11:54.738 real 0m0.189s 00:11:54.738 user 0m0.120s 00:11:54.738 sys 0m0.099s 00:11:54.738 ************************************ 00:11:54.738 END TEST version 00:11:54.738 ************************************ 00:11:54.738 14:38:03 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:54.738 14:38:03 version -- common/autotest_common.sh@10 -- # set +x 00:11:54.997 14:38:03 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:11:54.997 14:38:03 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:11:54.997 14:38:03 -- spdk/autotest.sh@194 -- # uname -s 00:11:54.997 14:38:03 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:11:54.997 14:38:03 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:11:54.997 14:38:03 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:11:54.997 14:38:03 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:11:54.997 14:38:03 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:11:54.997 14:38:03 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:54.997 14:38:03 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:54.997 14:38:03 -- common/autotest_common.sh@10 -- # set +x 00:11:54.997 ************************************ 00:11:54.997 START TEST spdk_dd 00:11:54.997 ************************************ 00:11:54.997 14:38:03 spdk_dd -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:11:54.997 * Looking for test storage... 00:11:54.997 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:54.997 14:38:03 spdk_dd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:54.997 14:38:03 spdk_dd -- common/autotest_common.sh@1691 -- # lcov --version 00:11:54.997 14:38:03 spdk_dd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:54.997 14:38:04 spdk_dd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@345 -- # : 1 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@368 -- # return 0 00:11:54.997 14:38:04 spdk_dd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:54.997 14:38:04 spdk_dd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:54.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.997 --rc genhtml_branch_coverage=1 00:11:54.997 --rc genhtml_function_coverage=1 00:11:54.997 --rc genhtml_legend=1 00:11:54.997 --rc geninfo_all_blocks=1 00:11:54.997 --rc geninfo_unexecuted_blocks=1 00:11:54.997 00:11:54.997 ' 00:11:54.997 14:38:04 spdk_dd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:54.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.997 --rc genhtml_branch_coverage=1 00:11:54.997 --rc genhtml_function_coverage=1 00:11:54.997 --rc genhtml_legend=1 00:11:54.997 --rc geninfo_all_blocks=1 00:11:54.997 --rc geninfo_unexecuted_blocks=1 00:11:54.997 00:11:54.997 ' 00:11:54.997 14:38:04 spdk_dd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:54.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.997 --rc genhtml_branch_coverage=1 00:11:54.997 --rc genhtml_function_coverage=1 00:11:54.997 --rc genhtml_legend=1 00:11:54.997 --rc geninfo_all_blocks=1 00:11:54.997 --rc geninfo_unexecuted_blocks=1 00:11:54.997 00:11:54.997 ' 00:11:54.997 14:38:04 spdk_dd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:54.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.997 --rc genhtml_branch_coverage=1 00:11:54.997 --rc genhtml_function_coverage=1 00:11:54.997 --rc genhtml_legend=1 00:11:54.997 --rc geninfo_all_blocks=1 00:11:54.997 --rc geninfo_unexecuted_blocks=1 00:11:54.997 00:11:54.997 ' 00:11:54.997 14:38:04 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.997 14:38:04 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.997 14:38:04 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.997 14:38:04 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.997 14:38:04 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.997 14:38:04 spdk_dd -- paths/export.sh@5 -- # export PATH 00:11:54.997 14:38:04 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.997 14:38:04 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:55.257 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:55.257 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:55.257 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:55.257 14:38:04 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:11:55.257 14:38:04 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@233 -- # local class 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@235 -- # local progif 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@236 -- # class=01 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:11:55.257 14:38:04 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@18 -- # local i 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@27 -- # return 0 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@18 -- # local i 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@27 -- # return 0 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:11:55.257 14:38:04 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:11:55.257 14:38:04 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@139 -- # local lib 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
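For reference, the nvme_in_userspace enumeration traced just above reduces to the following minimal standalone sketch. The helper name list_nvme_bdfs is illustrative only (it is not a function in scripts/common.sh); the pipeline mirrors the traced one: NVMe controllers are PCI class 01, subclass 08, prog-if 02, so lspci output is filtered on prog-if "-p02" and class code "0108", and each matching BDF (here 0000:00:10.0 and 0000:00:11.0) is printed one per line.

# minimal sketch, assuming the hypothetical helper name list_nvme_bdfs
list_nvme_bdfs() {
    # -mm: machine-readable, -n: numeric IDs, -D: include the PCI domain in the slot field
    lspci -mm -n -D \
        | grep -i -- -p02 \
        | awk -v cc='"0108"' -F ' ' '{ if (cc ~ $2) print $1 }' \
        | tr -d '"'
}
list_nvme_bdfs    # expected here: 0000:00:10.0 and 0000:00:11.0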
00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
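The long run of read/compare iterations above and below is the check_liburing step: objdump -p dumps the dynamic section of the spdk_dd binary, the NEEDED entries name every shared library it links against, and each entry is matched against liburing.so.*. A minimal sketch of that logic, under an assumed helper name (is_linked_to_liburing is not the literal dd/common.sh code):

# minimal sketch, assuming the hypothetical helper name is_linked_to_liburing
is_linked_to_liburing() {
    local binary=$1 _ lib
    while read -r _ lib _; do
        # a NEEDED line looks like: "  NEEDED  liburing.so.2"
        if [[ $lib == liburing.so.* ]]; then
            return 0    # liburing found among the NEEDED entries
        fi
    done < <(objdump -p "$binary" | grep NEEDED)
    return 1
}
# usage with the binary path taken from the trace:
# is_linked_to_liburing /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd && echo 'spdk_dd linked to liburing'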
00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.257 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:11:55.258 * spdk_dd linked to liburing 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:11:55.258 14:38:04 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:55.258 14:38:04 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:55.259 14:38:04 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:55.259 14:38:04 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:11:55.259 14:38:04 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:11:55.259 14:38:04 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:11:55.259 14:38:04 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:11:55.259 14:38:04 spdk_dd -- dd/common.sh@153 -- # return 0 00:11:55.259 14:38:04 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:11:55.259 14:38:04 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:11:55.259 14:38:04 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:55.259 14:38:04 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:55.259 14:38:04 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:55.537 ************************************ 00:11:55.537 START TEST spdk_dd_basic_rw 00:11:55.537 ************************************ 00:11:55.537 14:38:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:11:55.537 * Looking for test storage... 00:11:55.537 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:55.537 14:38:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:55.537 14:38:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # lcov --version 00:11:55.537 14:38:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:55.537 14:38:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:55.537 14:38:04 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:55.537 14:38:04 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:55.537 14:38:04 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:55.537 14:38:04 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:11:55.537 14:38:04 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:11:55.537 14:38:04 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:11:55.537 14:38:04 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:11:55.537 14:38:04 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:11:55.537 14:38:04 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:11:55.537 14:38:04 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:11:55.537 14:38:04 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:55.537 14:38:04 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:11:55.537 14:38:04 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:11:55.537 14:38:04 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:55.537 14:38:04 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:55.537 14:38:04 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:11:55.537 14:38:04 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:55.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.538 --rc genhtml_branch_coverage=1 00:11:55.538 --rc genhtml_function_coverage=1 00:11:55.538 --rc genhtml_legend=1 00:11:55.538 --rc geninfo_all_blocks=1 00:11:55.538 --rc geninfo_unexecuted_blocks=1 00:11:55.538 00:11:55.538 ' 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:55.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.538 --rc genhtml_branch_coverage=1 00:11:55.538 --rc genhtml_function_coverage=1 00:11:55.538 --rc genhtml_legend=1 00:11:55.538 --rc geninfo_all_blocks=1 00:11:55.538 --rc geninfo_unexecuted_blocks=1 00:11:55.538 00:11:55.538 ' 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:55.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.538 --rc genhtml_branch_coverage=1 00:11:55.538 --rc genhtml_function_coverage=1 00:11:55.538 --rc genhtml_legend=1 00:11:55.538 --rc geninfo_all_blocks=1 00:11:55.538 --rc geninfo_unexecuted_blocks=1 00:11:55.538 00:11:55.538 ' 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:55.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.538 --rc genhtml_branch_coverage=1 00:11:55.538 --rc genhtml_function_coverage=1 00:11:55.538 --rc genhtml_legend=1 00:11:55.538 --rc geninfo_all_blocks=1 00:11:55.538 --rc geninfo_unexecuted_blocks=1 00:11:55.538 00:11:55.538 ' 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
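The entries that follow are the get_native_nvme_bs step: the full spdk_nvme_identify output for the controller at 0000:00:10.0 is captured, the index of the current LBA format is extracted ("Current LBA Format: LBA Format #04"), and that format's data size (4096 bytes) becomes the native block size used by basic_rw.sh. A minimal sketch of that extraction, close to but not verbatim the dd/common.sh helper:

# minimal sketch of the native-block-size extraction; paths taken from the trace
get_native_nvme_bs() {
    local pci=$1 id lbaf
    id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci") || return 1
    # e.g. "Current LBA Format: LBA Format #04" -> lbaf=04
    [[ $id =~ "Current LBA Format: LBA Format #"([0-9]+) ]] || return 1
    lbaf=${BASH_REMATCH[1]}
    # e.g. "LBA Format #04: Data Size: 4096 Metadata Size: 0" -> 4096
    [[ $id =~ "LBA Format #$lbaf: Data Size: "([0-9]+) ]] || return 1
    echo "${BASH_REMATCH[1]}"
}
# get_native_nvme_bs 0000:00:10.0   # prints 4096 for this QEMU controller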
00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:11:55.538 14:38:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:11:55.799 14:38:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:11:55.799 14:38:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:11:55.800 14:38:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:11:55.800 14:38:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:11:55.800 14:38:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:11:55.800 14:38:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:11:55.800 14:38:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:11:55.800 14:38:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:11:55.800 14:38:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:11:55.800 14:38:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:55.800 14:38:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:11:55.800 14:38:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:11:55.800 14:38:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:55.800 14:38:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:11:55.800 ************************************ 00:11:55.800 START TEST dd_bs_lt_native_bs 00:11:55.800 ************************************ 00:11:55.800 14:38:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1127 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:11:55.800 14:38:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:11:55.800 14:38:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:11:55.800 14:38:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:55.800 14:38:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:55.800 14:38:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:55.800 14:38:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:55.800 14:38:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:55.800 14:38:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:55.800 14:38:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:55.800 14:38:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:55.800 14:38:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:11:55.800 { 00:11:55.800 "subsystems": [ 00:11:55.800 { 00:11:55.800 "subsystem": "bdev", 00:11:55.800 "config": [ 00:11:55.800 { 00:11:55.800 "params": { 00:11:55.800 "trtype": "pcie", 00:11:55.800 "traddr": "0000:00:10.0", 00:11:55.800 "name": "Nvme0" 00:11:55.800 }, 00:11:55.800 "method": "bdev_nvme_attach_controller" 00:11:55.800 }, 00:11:55.800 { 00:11:55.800 "method": "bdev_wait_for_examine" 00:11:55.800 } 00:11:55.800 ] 00:11:55.800 } 00:11:55.800 ] 00:11:55.800 } 00:11:55.800 [2024-11-04 14:38:04.782299] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:11:55.800 [2024-11-04 14:38:04.782368] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58896 ] 00:11:55.800 [2024-11-04 14:38:04.916205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.059 [2024-11-04 14:38:04.957942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.059 [2024-11-04 14:38:04.986193] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:56.059 [2024-11-04 14:38:05.077158] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:11:56.059 [2024-11-04 14:38:05.077201] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:56.059 [2024-11-04 14:38:05.136904] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:56.059 14:38:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:11:56.059 14:38:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:56.059 14:38:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:11:56.059 14:38:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:11:56.059 14:38:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:11:56.059 14:38:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:56.059 00:11:56.059 real 0m0.423s 00:11:56.059 user 0m0.269s 00:11:56.059 sys 0m0.089s 00:11:56.059 14:38:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:56.059 14:38:05 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:11:56.059 ************************************ 00:11:56.059 END TEST dd_bs_lt_native_bs 00:11:56.059 ************************************ 00:11:56.316 14:38:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:11:56.316 14:38:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:56.316 14:38:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:56.316 14:38:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:11:56.316 ************************************ 00:11:56.316 START TEST dd_rw 00:11:56.316 ************************************ 00:11:56.316 14:38:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1127 -- # basic_rw 4096 00:11:56.316 14:38:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:11:56.316 14:38:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:11:56.316 14:38:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:11:56.316 14:38:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:11:56.316 14:38:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:11:56.316 14:38:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:11:56.316 14:38:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:11:56.316 14:38:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:11:56.316 14:38:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:11:56.316 14:38:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:11:56.316 14:38:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:11:56.316 14:38:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:11:56.316 14:38:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:11:56.316 14:38:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:11:56.316 14:38:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:11:56.316 14:38:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:11:56.316 14:38:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:11:56.316 14:38:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:56.576 14:38:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:11:56.576 14:38:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:11:56.576 14:38:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:56.576 14:38:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:56.576 { 00:11:56.576 "subsystems": [ 00:11:56.576 { 00:11:56.576 "subsystem": "bdev", 00:11:56.576 "config": [ 00:11:56.576 { 00:11:56.576 "params": { 00:11:56.576 "trtype": "pcie", 00:11:56.576 "traddr": "0000:00:10.0", 00:11:56.576 "name": "Nvme0" 00:11:56.576 }, 00:11:56.576 "method": "bdev_nvme_attach_controller" 00:11:56.576 }, 00:11:56.576 { 00:11:56.576 "method": "bdev_wait_for_examine" 00:11:56.576 } 00:11:56.576 ] 00:11:56.576 } 
00:11:56.576 ] 00:11:56.576 } 00:11:56.576 [2024-11-04 14:38:05.708091] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:11:56.576 [2024-11-04 14:38:05.708177] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58926 ] 00:11:56.833 [2024-11-04 14:38:05.854356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.833 [2024-11-04 14:38:05.890148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.834 [2024-11-04 14:38:05.921483] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:57.101  [2024-11-04T14:38:06.241Z] Copying: 60/60 [kB] (average 29 MBps) 00:11:57.101 00:11:57.101 14:38:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:11:57.101 14:38:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:11:57.101 14:38:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:57.101 14:38:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:57.101 { 00:11:57.101 "subsystems": [ 00:11:57.101 { 00:11:57.101 "subsystem": "bdev", 00:11:57.101 "config": [ 00:11:57.101 { 00:11:57.101 "params": { 00:11:57.101 "trtype": "pcie", 00:11:57.101 "traddr": "0000:00:10.0", 00:11:57.101 "name": "Nvme0" 00:11:57.101 }, 00:11:57.101 "method": "bdev_nvme_attach_controller" 00:11:57.101 }, 00:11:57.101 { 00:11:57.101 "method": "bdev_wait_for_examine" 00:11:57.101 } 00:11:57.101 ] 00:11:57.101 } 00:11:57.101 ] 00:11:57.101 } 00:11:57.101 [2024-11-04 14:38:06.166150] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
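(Note on the dd_bs_lt_native_bs trace above: the test scrapes the in-use LBA format out of the identify dump to learn the native block size, then expects spdk_dd to refuse a smaller --bs. The lines below are a minimal illustrative sketch, not the actual test/dd/basic_rw.sh code; SPDK_DD, identify.txt and bdev.json are hypothetical stand-ins, while the regex, the --bs=2048 invocation and the quoted error text come from the log above.)
    # Pull the data size of the current LBA format (#04 in this run) out of a captured identify dump.
    pattern='LBA Format #04: Data Size: *([0-9]+)'
    identify_output=$(cat identify.txt)              # hypothetical capture of the controller dump
    if [[ $identify_output =~ $pattern ]]; then
        native_bs=${BASH_REMATCH[1]}                 # 4096 on this target
    fi
    # spdk_dd must reject a --bs smaller than the native block size; the log shows:
    #   "--bs value cannot be less than input (1) neither output (4096) native block size"
    if "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs=2048 --json bdev.json; then
        echo "FAIL: bs smaller than native_bs was accepted" >&2
    fi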
00:11:57.101 [2024-11-04 14:38:06.166216] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58935 ] 00:11:57.374 [2024-11-04 14:38:06.305500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.374 [2024-11-04 14:38:06.347376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.374 [2024-11-04 14:38:06.383042] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:57.374  [2024-11-04T14:38:06.772Z] Copying: 60/60 [kB] (average 19 MBps) 00:11:57.632 00:11:57.632 14:38:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:57.632 14:38:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:11:57.632 14:38:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:57.632 14:38:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:11:57.632 14:38:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:11:57.632 14:38:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:11:57.632 14:38:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:11:57.632 14:38:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:11:57.632 14:38:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:11:57.632 14:38:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:57.632 14:38:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:57.632 { 00:11:57.632 "subsystems": [ 00:11:57.632 { 00:11:57.632 "subsystem": "bdev", 00:11:57.632 "config": [ 00:11:57.632 { 00:11:57.632 "params": { 00:11:57.632 "trtype": "pcie", 00:11:57.632 "traddr": "0000:00:10.0", 00:11:57.632 "name": "Nvme0" 00:11:57.632 }, 00:11:57.632 "method": "bdev_nvme_attach_controller" 00:11:57.632 }, 00:11:57.632 { 00:11:57.632 "method": "bdev_wait_for_examine" 00:11:57.632 } 00:11:57.632 ] 00:11:57.632 } 00:11:57.632 ] 00:11:57.632 } 00:11:57.632 [2024-11-04 14:38:06.637970] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:11:57.632 [2024-11-04 14:38:06.638034] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58951 ] 00:11:57.889 [2024-11-04 14:38:06.776210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.889 [2024-11-04 14:38:06.817365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.889 [2024-11-04 14:38:06.851571] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:57.889  [2024-11-04T14:38:07.287Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:11:58.147 00:11:58.147 14:38:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:11:58.147 14:38:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:11:58.147 14:38:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:11:58.147 14:38:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:11:58.147 14:38:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:11:58.147 14:38:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:11:58.147 14:38:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:58.720 14:38:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:11:58.720 14:38:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:11:58.720 14:38:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:58.720 14:38:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:58.720 { 00:11:58.720 "subsystems": [ 00:11:58.720 { 00:11:58.720 "subsystem": "bdev", 00:11:58.721 "config": [ 00:11:58.721 { 00:11:58.721 "params": { 00:11:58.721 "trtype": "pcie", 00:11:58.721 "traddr": "0000:00:10.0", 00:11:58.721 "name": "Nvme0" 00:11:58.721 }, 00:11:58.721 "method": "bdev_nvme_attach_controller" 00:11:58.721 }, 00:11:58.721 { 00:11:58.721 "method": "bdev_wait_for_examine" 00:11:58.721 } 00:11:58.721 ] 00:11:58.721 } 00:11:58.721 ] 00:11:58.721 } 00:11:58.721 [2024-11-04 14:38:07.611496] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:11:58.721 [2024-11-04 14:38:07.611554] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58970 ] 00:11:58.721 [2024-11-04 14:38:07.750208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.721 [2024-11-04 14:38:07.784690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.721 [2024-11-04 14:38:07.816056] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:58.978  [2024-11-04T14:38:08.118Z] Copying: 60/60 [kB] (average 58 MBps) 00:11:58.978 00:11:58.978 14:38:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:11:58.978 14:38:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:11:58.978 14:38:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:58.978 14:38:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:58.978 [2024-11-04 14:38:08.062845] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:11:58.978 [2024-11-04 14:38:08.062909] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58983 ] 00:11:58.978 { 00:11:58.978 "subsystems": [ 00:11:58.978 { 00:11:58.978 "subsystem": "bdev", 00:11:58.978 "config": [ 00:11:58.978 { 00:11:58.978 "params": { 00:11:58.978 "trtype": "pcie", 00:11:58.978 "traddr": "0000:00:10.0", 00:11:58.978 "name": "Nvme0" 00:11:58.978 }, 00:11:58.978 "method": "bdev_nvme_attach_controller" 00:11:58.978 }, 00:11:58.978 { 00:11:58.978 "method": "bdev_wait_for_examine" 00:11:58.978 } 00:11:58.978 ] 00:11:58.978 } 00:11:58.978 ] 00:11:58.978 } 00:11:59.236 [2024-11-04 14:38:08.201294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.236 [2024-11-04 14:38:08.235836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.236 [2024-11-04 14:38:08.267792] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:59.236  [2024-11-04T14:38:08.634Z] Copying: 60/60 [kB] (average 58 MBps) 00:11:59.494 00:11:59.494 14:38:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:59.494 14:38:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:11:59.494 14:38:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:59.494 14:38:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:11:59.494 14:38:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:11:59.494 14:38:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:11:59.494 14:38:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:11:59.494 14:38:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
00:11:59.494 14:38:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:11:59.494 14:38:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:59.494 14:38:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:59.494 [2024-11-04 14:38:08.505474] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:11:59.494 [2024-11-04 14:38:08.505544] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58993 ] 00:11:59.494 { 00:11:59.494 "subsystems": [ 00:11:59.494 { 00:11:59.494 "subsystem": "bdev", 00:11:59.494 "config": [ 00:11:59.494 { 00:11:59.494 "params": { 00:11:59.494 "trtype": "pcie", 00:11:59.494 "traddr": "0000:00:10.0", 00:11:59.494 "name": "Nvme0" 00:11:59.494 }, 00:11:59.494 "method": "bdev_nvme_attach_controller" 00:11:59.494 }, 00:11:59.494 { 00:11:59.494 "method": "bdev_wait_for_examine" 00:11:59.494 } 00:11:59.494 ] 00:11:59.494 } 00:11:59.494 ] 00:11:59.494 } 00:11:59.758 [2024-11-04 14:38:08.641666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.758 [2024-11-04 14:38:08.677922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.758 [2024-11-04 14:38:08.709909] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:59.758  [2024-11-04T14:38:09.168Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:12:00.028 00:12:00.028 14:38:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:12:00.028 14:38:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:12:00.028 14:38:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:12:00.028 14:38:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:12:00.028 14:38:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:12:00.028 14:38:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:12:00.028 14:38:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:12:00.028 14:38:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:12:00.285 14:38:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:12:00.285 14:38:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:12:00.285 14:38:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:12:00.285 14:38:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:12:00.285 [2024-11-04 14:38:09.368151] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:00.285 [2024-11-04 14:38:09.368209] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59012 ] 00:12:00.285 { 00:12:00.285 "subsystems": [ 00:12:00.285 { 00:12:00.285 "subsystem": "bdev", 00:12:00.285 "config": [ 00:12:00.285 { 00:12:00.285 "params": { 00:12:00.285 "trtype": "pcie", 00:12:00.285 "traddr": "0000:00:10.0", 00:12:00.285 "name": "Nvme0" 00:12:00.285 }, 00:12:00.285 "method": "bdev_nvme_attach_controller" 00:12:00.285 }, 00:12:00.285 { 00:12:00.285 "method": "bdev_wait_for_examine" 00:12:00.285 } 00:12:00.285 ] 00:12:00.285 } 00:12:00.285 ] 00:12:00.285 } 00:12:00.543 [2024-11-04 14:38:09.504888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.543 [2024-11-04 14:38:09.543965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.543 [2024-11-04 14:38:09.577815] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:00.543  [2024-11-04T14:38:09.941Z] Copying: 56/56 [kB] (average 54 MBps) 00:12:00.801 00:12:00.801 14:38:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:12:00.801 14:38:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:12:00.801 14:38:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:12:00.801 14:38:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:12:00.801 [2024-11-04 14:38:09.822486] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:00.801 [2024-11-04 14:38:09.822550] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59030 ] 00:12:00.801 { 00:12:00.801 "subsystems": [ 00:12:00.801 { 00:12:00.801 "subsystem": "bdev", 00:12:00.801 "config": [ 00:12:00.801 { 00:12:00.801 "params": { 00:12:00.801 "trtype": "pcie", 00:12:00.801 "traddr": "0000:00:10.0", 00:12:00.801 "name": "Nvme0" 00:12:00.801 }, 00:12:00.801 "method": "bdev_nvme_attach_controller" 00:12:00.801 }, 00:12:00.801 { 00:12:00.801 "method": "bdev_wait_for_examine" 00:12:00.801 } 00:12:00.801 ] 00:12:00.801 } 00:12:00.801 ] 00:12:00.801 } 00:12:01.058 [2024-11-04 14:38:09.960418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.058 [2024-11-04 14:38:10.000042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.058 [2024-11-04 14:38:10.037019] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:01.058  [2024-11-04T14:38:10.457Z] Copying: 56/56 [kB] (average 54 MBps) 00:12:01.317 00:12:01.317 14:38:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:01.317 14:38:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:12:01.317 14:38:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:12:01.317 14:38:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:12:01.317 14:38:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:12:01.317 14:38:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:12:01.317 14:38:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:12:01.317 14:38:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:12:01.317 14:38:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:12:01.317 14:38:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:12:01.317 14:38:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:12:01.317 [2024-11-04 14:38:10.290600] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:01.317 [2024-11-04 14:38:10.290676] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59041 ] 00:12:01.317 { 00:12:01.317 "subsystems": [ 00:12:01.317 { 00:12:01.317 "subsystem": "bdev", 00:12:01.317 "config": [ 00:12:01.317 { 00:12:01.317 "params": { 00:12:01.317 "trtype": "pcie", 00:12:01.317 "traddr": "0000:00:10.0", 00:12:01.317 "name": "Nvme0" 00:12:01.317 }, 00:12:01.317 "method": "bdev_nvme_attach_controller" 00:12:01.317 }, 00:12:01.317 { 00:12:01.317 "method": "bdev_wait_for_examine" 00:12:01.317 } 00:12:01.317 ] 00:12:01.317 } 00:12:01.317 ] 00:12:01.317 } 00:12:01.317 [2024-11-04 14:38:10.429675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.574 [2024-11-04 14:38:10.469286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.574 [2024-11-04 14:38:10.503426] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:01.574  [2024-11-04T14:38:10.972Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:12:01.832 00:12:01.832 14:38:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:12:01.832 14:38:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:12:01.832 14:38:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:12:01.832 14:38:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:12:01.832 14:38:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:12:01.832 14:38:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:12:01.833 14:38:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:12:02.090 14:38:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:12:02.091 14:38:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:12:02.091 14:38:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:12:02.091 14:38:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:12:02.091 [2024-11-04 14:38:11.149260] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:02.091 [2024-11-04 14:38:11.149353] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59060 ] 00:12:02.091 { 00:12:02.091 "subsystems": [ 00:12:02.091 { 00:12:02.091 "subsystem": "bdev", 00:12:02.091 "config": [ 00:12:02.091 { 00:12:02.091 "params": { 00:12:02.091 "trtype": "pcie", 00:12:02.091 "traddr": "0000:00:10.0", 00:12:02.091 "name": "Nvme0" 00:12:02.091 }, 00:12:02.091 "method": "bdev_nvme_attach_controller" 00:12:02.091 }, 00:12:02.091 { 00:12:02.091 "method": "bdev_wait_for_examine" 00:12:02.091 } 00:12:02.091 ] 00:12:02.091 } 00:12:02.091 ] 00:12:02.091 } 00:12:02.349 [2024-11-04 14:38:11.288617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.349 [2024-11-04 14:38:11.325073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.349 [2024-11-04 14:38:11.357009] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:02.349  [2024-11-04T14:38:11.747Z] Copying: 56/56 [kB] (average 54 MBps) 00:12:02.607 00:12:02.607 14:38:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:12:02.607 14:38:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:12:02.607 14:38:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:12:02.607 14:38:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:12:02.607 [2024-11-04 14:38:11.602543] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:02.607 [2024-11-04 14:38:11.602631] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59068 ] 00:12:02.607 { 00:12:02.607 "subsystems": [ 00:12:02.607 { 00:12:02.607 "subsystem": "bdev", 00:12:02.607 "config": [ 00:12:02.607 { 00:12:02.607 "params": { 00:12:02.607 "trtype": "pcie", 00:12:02.607 "traddr": "0000:00:10.0", 00:12:02.607 "name": "Nvme0" 00:12:02.607 }, 00:12:02.607 "method": "bdev_nvme_attach_controller" 00:12:02.607 }, 00:12:02.607 { 00:12:02.607 "method": "bdev_wait_for_examine" 00:12:02.607 } 00:12:02.607 ] 00:12:02.607 } 00:12:02.607 ] 00:12:02.607 } 00:12:02.607 [2024-11-04 14:38:11.742987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.864 [2024-11-04 14:38:11.784208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.864 [2024-11-04 14:38:11.819515] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:02.864  [2024-11-04T14:38:12.332Z] Copying: 56/56 [kB] (average 54 MBps) 00:12:03.192 00:12:03.192 14:38:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:03.192 14:38:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:12:03.192 14:38:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:12:03.192 14:38:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:12:03.192 14:38:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:12:03.192 14:38:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:12:03.192 14:38:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:12:03.192 14:38:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:12:03.192 14:38:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:12:03.192 14:38:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:12:03.192 14:38:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:12:03.192 [2024-11-04 14:38:12.070420] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:03.192 [2024-11-04 14:38:12.070496] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59089 ] 00:12:03.192 { 00:12:03.192 "subsystems": [ 00:12:03.192 { 00:12:03.192 "subsystem": "bdev", 00:12:03.192 "config": [ 00:12:03.192 { 00:12:03.192 "params": { 00:12:03.192 "trtype": "pcie", 00:12:03.192 "traddr": "0000:00:10.0", 00:12:03.192 "name": "Nvme0" 00:12:03.192 }, 00:12:03.192 "method": "bdev_nvme_attach_controller" 00:12:03.192 }, 00:12:03.192 { 00:12:03.192 "method": "bdev_wait_for_examine" 00:12:03.192 } 00:12:03.192 ] 00:12:03.192 } 00:12:03.192 ] 00:12:03.192 } 00:12:03.192 [2024-11-04 14:38:12.210852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.192 [2024-11-04 14:38:12.252576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.192 [2024-11-04 14:38:12.286900] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:03.450  [2024-11-04T14:38:12.590Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:12:03.450 00:12:03.450 14:38:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:12:03.450 14:38:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:12:03.450 14:38:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:12:03.450 14:38:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:12:03.450 14:38:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:12:03.450 14:38:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:12:03.450 14:38:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:12:03.450 14:38:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:12:04.016 14:38:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:12:04.016 14:38:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:12:04.016 14:38:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:12:04.016 14:38:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:12:04.016 [2024-11-04 14:38:12.898348] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:04.016 [2024-11-04 14:38:12.898423] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59107 ] 00:12:04.016 { 00:12:04.016 "subsystems": [ 00:12:04.016 { 00:12:04.016 "subsystem": "bdev", 00:12:04.016 "config": [ 00:12:04.016 { 00:12:04.016 "params": { 00:12:04.016 "trtype": "pcie", 00:12:04.016 "traddr": "0000:00:10.0", 00:12:04.016 "name": "Nvme0" 00:12:04.016 }, 00:12:04.016 "method": "bdev_nvme_attach_controller" 00:12:04.016 }, 00:12:04.016 { 00:12:04.016 "method": "bdev_wait_for_examine" 00:12:04.016 } 00:12:04.016 ] 00:12:04.016 } 00:12:04.016 ] 00:12:04.016 } 00:12:04.016 [2024-11-04 14:38:13.037418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.016 [2024-11-04 14:38:13.072676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.016 [2024-11-04 14:38:13.104351] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:04.274  [2024-11-04T14:38:13.414Z] Copying: 48/48 [kB] (average 46 MBps) 00:12:04.274 00:12:04.274 14:38:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:12:04.274 14:38:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:12:04.274 14:38:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:12:04.274 14:38:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:12:04.274 [2024-11-04 14:38:13.336045] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:04.274 [2024-11-04 14:38:13.336117] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59116 ] 00:12:04.274 { 00:12:04.274 "subsystems": [ 00:12:04.274 { 00:12:04.274 "subsystem": "bdev", 00:12:04.274 "config": [ 00:12:04.274 { 00:12:04.274 "params": { 00:12:04.274 "trtype": "pcie", 00:12:04.274 "traddr": "0000:00:10.0", 00:12:04.274 "name": "Nvme0" 00:12:04.274 }, 00:12:04.274 "method": "bdev_nvme_attach_controller" 00:12:04.274 }, 00:12:04.274 { 00:12:04.274 "method": "bdev_wait_for_examine" 00:12:04.274 } 00:12:04.274 ] 00:12:04.274 } 00:12:04.274 ] 00:12:04.274 } 00:12:04.531 [2024-11-04 14:38:13.473976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.532 [2024-11-04 14:38:13.510442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.532 [2024-11-04 14:38:13.543183] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:04.532  [2024-11-04T14:38:13.929Z] Copying: 48/48 [kB] (average 46 MBps) 00:12:04.789 00:12:04.789 14:38:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:04.789 14:38:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:12:04.789 14:38:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:12:04.789 14:38:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:12:04.789 14:38:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:12:04.789 14:38:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:12:04.789 14:38:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:12:04.789 14:38:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:12:04.789 14:38:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:12:04.789 14:38:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:12:04.789 14:38:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:12:04.789 [2024-11-04 14:38:13.781212] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:04.789 [2024-11-04 14:38:13.781284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59132 ] 00:12:04.789 { 00:12:04.789 "subsystems": [ 00:12:04.789 { 00:12:04.789 "subsystem": "bdev", 00:12:04.789 "config": [ 00:12:04.789 { 00:12:04.789 "params": { 00:12:04.789 "trtype": "pcie", 00:12:04.789 "traddr": "0000:00:10.0", 00:12:04.789 "name": "Nvme0" 00:12:04.789 }, 00:12:04.789 "method": "bdev_nvme_attach_controller" 00:12:04.789 }, 00:12:04.789 { 00:12:04.789 "method": "bdev_wait_for_examine" 00:12:04.789 } 00:12:04.789 ] 00:12:04.789 } 00:12:04.789 ] 00:12:04.789 } 00:12:04.789 [2024-11-04 14:38:13.915399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.046 [2024-11-04 14:38:13.951447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.046 [2024-11-04 14:38:13.983366] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:05.046  [2024-11-04T14:38:14.186Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:12:05.046 00:12:05.046 14:38:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:12:05.303 14:38:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:12:05.303 14:38:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:12:05.303 14:38:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:12:05.303 14:38:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:12:05.303 14:38:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:12:05.303 14:38:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:12:05.559 14:38:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:12:05.559 14:38:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:12:05.559 14:38:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:12:05.559 14:38:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:12:05.559 [2024-11-04 14:38:14.508690] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:05.559 [2024-11-04 14:38:14.508757] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59145 ] 00:12:05.559 { 00:12:05.559 "subsystems": [ 00:12:05.559 { 00:12:05.559 "subsystem": "bdev", 00:12:05.559 "config": [ 00:12:05.559 { 00:12:05.559 "params": { 00:12:05.559 "trtype": "pcie", 00:12:05.559 "traddr": "0000:00:10.0", 00:12:05.559 "name": "Nvme0" 00:12:05.559 }, 00:12:05.559 "method": "bdev_nvme_attach_controller" 00:12:05.559 }, 00:12:05.559 { 00:12:05.559 "method": "bdev_wait_for_examine" 00:12:05.559 } 00:12:05.559 ] 00:12:05.559 } 00:12:05.559 ] 00:12:05.559 } 00:12:05.559 [2024-11-04 14:38:14.648485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.559 [2024-11-04 14:38:14.690488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.817 [2024-11-04 14:38:14.725139] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:05.817  [2024-11-04T14:38:14.957Z] Copying: 48/48 [kB] (average 46 MBps) 00:12:05.817 00:12:05.817 14:38:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:12:05.817 14:38:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:12:05.817 14:38:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:12:05.817 14:38:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:12:06.074 [2024-11-04 14:38:14.967899] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:06.074 [2024-11-04 14:38:14.967969] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59163 ] 00:12:06.074 { 00:12:06.074 "subsystems": [ 00:12:06.074 { 00:12:06.074 "subsystem": "bdev", 00:12:06.074 "config": [ 00:12:06.074 { 00:12:06.074 "params": { 00:12:06.074 "trtype": "pcie", 00:12:06.074 "traddr": "0000:00:10.0", 00:12:06.074 "name": "Nvme0" 00:12:06.074 }, 00:12:06.074 "method": "bdev_nvme_attach_controller" 00:12:06.074 }, 00:12:06.074 { 00:12:06.074 "method": "bdev_wait_for_examine" 00:12:06.074 } 00:12:06.074 ] 00:12:06.074 } 00:12:06.074 ] 00:12:06.074 } 00:12:06.074 [2024-11-04 14:38:15.109705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.074 [2024-11-04 14:38:15.148902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.074 [2024-11-04 14:38:15.183474] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:06.332  [2024-11-04T14:38:15.472Z] Copying: 48/48 [kB] (average 46 MBps) 00:12:06.332 00:12:06.332 14:38:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:06.332 14:38:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:12:06.332 14:38:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:12:06.332 14:38:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:12:06.332 14:38:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:12:06.332 14:38:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:12:06.332 14:38:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:12:06.332 14:38:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:12:06.332 14:38:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:12:06.332 14:38:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:12:06.332 14:38:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:12:06.332 { 00:12:06.332 "subsystems": [ 00:12:06.332 { 00:12:06.332 "subsystem": "bdev", 00:12:06.332 "config": [ 00:12:06.332 { 00:12:06.332 "params": { 00:12:06.332 "trtype": "pcie", 00:12:06.332 "traddr": "0000:00:10.0", 00:12:06.332 "name": "Nvme0" 00:12:06.332 }, 00:12:06.332 "method": "bdev_nvme_attach_controller" 00:12:06.332 }, 00:12:06.332 { 00:12:06.332 "method": "bdev_wait_for_examine" 00:12:06.332 } 00:12:06.332 ] 00:12:06.332 } 00:12:06.332 ] 00:12:06.332 } 00:12:06.332 [2024-11-04 14:38:15.438879] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
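(To summarize the dd_rw passes traced through this stretch: for each block size 4096/8192/16384 bytes, i.e. native_bs shifted left by 0..2, and each queue depth 1 and 64, the test writes dd.dump0 to the Nvme0n1 bdev, reads it back into dd.dump1, diffs the two files, and zero-fills the device before the next pass. A condensed sketch of that loop follows; it is an illustration only, with SPDK_DD and bdev.json as hypothetical stand-ins and the per-block-size counts 15/7/3 taken from the traced commands.)
    native_bs=4096
    qds=(1 64)
    counts=(15 7 3)                                  # 15x4096, 7x8192, 3x16384 blocks per pass
    for i in 0 1 2; do
        bs=$((native_bs << i))
        count=${counts[i]}
        for qd in "${qds[@]}"; do
            # write the dump file out, read the same range back, and verify the round trip
            "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json bdev.json
            "$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json bdev.json
            diff -q dd.dump0 dd.dump1
            # clear_nvme step seen in the trace: overwrite the first 1 MiB with zeroes
            "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json bdev.json
        done
    done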
00:12:06.332 [2024-11-04 14:38:15.438943] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59174 ] 00:12:06.590 [2024-11-04 14:38:15.577074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.590 [2024-11-04 14:38:15.620524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.590 [2024-11-04 14:38:15.655529] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:06.847  [2024-11-04T14:38:15.987Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:12:06.847 00:12:06.847 00:12:06.847 real 0m10.656s 00:12:06.847 user 0m7.505s 00:12:06.847 sys 0m3.456s 00:12:06.847 14:38:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:06.847 14:38:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:12:06.847 ************************************ 00:12:06.847 END TEST dd_rw 00:12:06.847 ************************************ 00:12:06.847 14:38:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:12:06.848 14:38:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:06.848 14:38:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:06.848 14:38:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:12:06.848 ************************************ 00:12:06.848 START TEST dd_rw_offset 00:12:06.848 ************************************ 00:12:06.848 14:38:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1127 -- # basic_offset 00:12:06.848 14:38:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:12:06.848 14:38:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:12:06.848 14:38:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:12:06.848 14:38:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:12:06.848 14:38:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:12:06.848 14:38:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=78mbdutwao9ixxnx81ne67bic7zam0tyo57s89rlhukwbzkagdbtcsc1kkw3lrpi64xmh1wgbma2rm7i694hnbtpk9lubvdmct207aleszljo162a74bxrismcc2j9cmozhxw1ksfnozmr5kolsfbnbk26uij1tjzstvav6lgcxeau80jntx98j8xwnsbqvaf9eev0rht825h4jo89klgx2srj9rwwod7aaiziwetxolipw42mj96tr2tizy084bq0a20kk2b7m7jq966ynnqmcqjf5b29ojb4vpx36f3dz8hqi7et8m3apf9cv8gf91elyk3jbwn2ll602pllcxs88n179z5av74bz3kdtx005jvs7hlv4v1jf9nd8uow99n7zqdggxcixahxio0kjuht995ynro67peqmtj3a1x08qdjnd2f3vrgejbbejxojwvu86duencgs9sjanwrufr4fx01wz7bmggu2pgn4d9iwyakenocciwamho695a08os5jvlydixww2q3zynefa7lr041jiqinyudebz4muephxvyke3s4fsxta528qy0p21hyzoefgpg9xyxj80z1s2vvuqq60jom26vbgjsdfxfypj7lmj86494l2l0i01clyaxgvh028oqjkf5i3upqjduyajgrup2gfg6evh59asu1gbf8n74z09wqe3hs2opsu5vjnowv4edkx25tmqjjrpv5ilwk1szi1d5h5g5wlf262rh9wojbk0yjet5wosf1gealp0a2mmz2kxigpoi266llulv7z2bsdgvjah3a5yv476b4hkl9nisv2jdxyrk1hy47fzw3qedqcv5ulwzqv6n1mjt41fvbt3wfe1bxsv09msh9yqay4aqipyf7n4kyhwvo364kkla1x6ogi25rnc4hieitvtcb1rtmqcq766dv03z8lzpieo7gbqpnb0uuqlaemn56d5zctcydu09mzfrt1wjzbcwm8aqojh5313r0s9tqo7k46cx6rtxpj11vh3ud9ab522fbm5x1n90w6u63m3a3c885lze8ozabhpfd0ugk5hjwdqxv49vcs7jyeaxn202lcj229evx1fz9wp0tjhol3fu59kiq2cox10831ef2nfm9t04gdlysegkg4152vqxp2e69z9ozny5culhfav0r9kloi2ztq0a3zviyer767pn1ikqjh2rovhs4dim089u29eh824xkvj0s5gvloqklxgym68qbtj01gnygk37yf3gosi0bbthqy2zcugq9god9ke3od68wxyy5g3wi6fsg7hh00y9symyhno4ikvun8c2qy2v6xo9as6luyoryan4xe3vrqvm8bhj1yeyfuz66k9c7ch3nn5z5v0kvm8nkxe5066erc9tymcorglb5rlrsvh02ifq3fpyhlbuawbb2x4e4dv90xud37dyk19bkfxjvhlrg3bez5r1j5jh7x71lmp76pg6y6ajx3emw3ww0s6gwedn1np28pepj2zmzvxng0ix8rtb7vnf7ufik9udut2jpsszkqgsz4a2ty3azo21kdqyhwyao3yhp4qz83ic8g8uh7q127tmmsdxigxltacknxzevkisosr6g1n9bei1qo2u27jswx9k0zjsvfjd5068v6cr4cluivizuerknbn14d5eglhp0d4jagda9a4ljnilu6av54cvo2xl4lq1ykvuwqjq5hb59b3a1ol8hzdgaieg23harh7hweylw0ub3e6wzo67mcagqmua5e6wfd98efglyg5bg1r3u4z0v1tppt6757bc12eut9l5ow91tfdvh9wer4ti4w037zubwsh0aajqnfk79okz1cqhp3d158bdtaim3w1m49hf7a6v4oioolkxzezu6y7wy63lhe259kxr57td91ognyny7z17r17jqx9s4po44ksn6uniwkguwdiezh39o3exlv9yojawxb4zvcott5v4lg09yinvpyefjpynjpqj9370ecueyuxrs435ipmi0z1hoet51ofnhv7l5jai2fhpvnga8vonfwqbwnn1e0z84ldxww61eslc219zxy1u144itljq61t6n6kd4pqjnue76aih2aq2j1vbuv5ra9o5h1wp7lbqrlg0cw2e9d6npqydzbk9rfbwjhbu8zxvfk2wbqavhd5txyms0urkt9krn4cro017xrbl5s7i9w91qcrarj4jlf7sy9iwukgd7m55qcayhqb9yswdrtm9e2twlsvnxnop8rp1dzoan4wx2teqbmyheuojfq7hcdrvf3a9wqqge640fge0ritxoyupl5r3tiilj8g2ttrx42pe4utbakf8wdvr51qvrgi9rc296m01xbfmwu7kj70ce7bkhrmf30q9v7np6l7a142urqmz62tkgkn9u70t9q2z32t5rjpqp7tntlel0ac5ijjrnorjldri8ywfqec1fb8wgseihi9283naqxl9y9kqftobl9ojype7bqd1w7nvog85gmnq3hcf6311x724czni5wk3epzb5q50r1ffbjdgvr6odd4rx66pw8pnflh5zjwp75gq8ksvsmi7eydy0t5nynpr6aa16nyivy15vwc5gw1s6pq31lmn0pugripf37vxzfahb9jinor9z3oaxn1178gg9hhtotkqrdb0tr4b3viy0oky50b4ojmt4ifijg9ic47iut5vh71rimt6ghoon1h30azaw9tk15o7obg075f5zmorfsvgo9pas0hxndsve6bm7xuiqxivcnp1e3tuvs7eib93zn3thcaxccy03xotumjg3mm53l1naa3cf0lnm1z7u7x11uz0vup3gczl8rwjwoqhi27l4zlglw0cwg4e0pgfr4blz2wmqq8j50fkme721y7eebbznm7afy1y16jv8mw3h35mzqhl32jedmwqzzay863ibw62x40nf8d5awx2w7me9p7u4u5nn8i04etou3p3wbhtwlo3xf8x9bq9ymsy0csx7s5y75ni0kd372dqf4ac2frkmrj6hmh8qroplqslvuvuri98o4i2hi2e7ojpd91m62pldwws1np5nh3d5ip9gwk6j9juoenm2t7fny2vddn35o9fsei3mk6r11oes7j72sg4vd91ppss1ct7xun2mrg4g69j2g0dwtdlg0xdv7ryiftk78ay950fuf7lv4m4z6uixsmywlx3dtrz0v7t7eht25z7ya1cttzibte93k7efmbdz4pok1jhwp5gkq4314j20p4kusee4vnt7jqcc2c4to4p8g1dctb2lhuutlcknn9dxhs4atb5sze18s48zs7r2j46ln30svrrf73widoqsj4w24k366ot91dcwvd4fhtmhcrbvhhwwaigmobb9dbeu0t5urkp020g066u64lp38he7f7lt2rgj4l6ml5399piwhce15bmt093tdxkseyxxbj5kmvgdbfmzxosub8pa8pcnzgwak6ohfblbg5qoqlgcxwf3trhj3xihcznbv1bvy2x0ly2be5zzx6g
bkzbt5tl3zohysxh2l3rr6s24syp450gwrlun0h9x1l0isctwu0vh7ccdzob2q9ncojftmetnt3dpywbvnd009rqet6c3v3tjqfe9auj5fh2lphtktg61xibl06nmuq9wrxbz0ad5sltft8dli3mtbhf8acojupnpgz0c3whdrb310uecren4h2e8fg8wk8napojxacgjjeatorek8pfbjqtz48amwbo53v63lzf4vectstpil74su5wgb3viut0rowf68sea1uul0mhaor2xsmlj9wwilveu5gh7f58rn1p1snosjebxs7z7r5tk0f63t8w57968333t8m17ufxi8ens0chvdf2xqctdmodqd3gf90h27pkvtd16255rzaa01xcdvvvabw4q2kian86majpfs4evcc3krtjru2wayf0jy7j6epq3ubygscbakzbk2tqgt8ovpo18fwnejb0o0jupb0d9zjov4zbkrtzsbbrbfqd3solfbndtfizdymsfs3b40jxiqogsmpqs0qp3ijz9ousq9wukc 00:12:06.848 14:38:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:12:06.848 14:38:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:12:06.848 14:38:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:12:06.848 14:38:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:12:06.848 { 00:12:06.848 "subsystems": [ 00:12:06.848 { 00:12:06.848 "subsystem": "bdev", 00:12:06.848 "config": [ 00:12:06.848 { 00:12:06.848 "params": { 00:12:06.848 "trtype": "pcie", 00:12:06.848 "traddr": "0000:00:10.0", 00:12:06.848 "name": "Nvme0" 00:12:06.848 }, 00:12:06.848 "method": "bdev_nvme_attach_controller" 00:12:06.848 }, 00:12:06.848 { 00:12:06.848 "method": "bdev_wait_for_examine" 00:12:06.848 } 00:12:06.848 ] 00:12:06.848 } 00:12:06.848 ] 00:12:06.848 } 00:12:06.848 [2024-11-04 14:38:15.961255] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:12:06.848 [2024-11-04 14:38:15.961311] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59205 ] 00:12:07.106 [2024-11-04 14:38:16.101940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.106 [2024-11-04 14:38:16.138875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.106 [2024-11-04 14:38:16.170668] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:07.363  [2024-11-04T14:38:16.503Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:12:07.363 00:12:07.363 14:38:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:12:07.363 14:38:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:12:07.363 14:38:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:12:07.363 14:38:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:12:07.363 [2024-11-04 14:38:16.394940] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
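(The dd_rw_offset trace around this point writes a freshly generated 4096-byte pattern one block into the device with --seek=1, reads that block back with --skip=1 --count=1, and then compares the round-tripped text via the read -rn4096 data_check / [[ ... ]] check that follows. Below is a minimal sketch of the same idea; the file names and the random-data helper are stand-ins, the spdk_dd flags are the ones in the trace.)
    # generate a printable 4 KiB pattern (stand-in for the gen_bytes 4096 helper in the trace)
    data=$(head -c 3072 /dev/urandom | base64 | tr -d '\n' | head -c 4096)
    printf '%s' "$data" > dd.dump0
    # write it one block into the bdev, then read the same block back
    "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json bdev.json
    "$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json bdev.json
    read -rn4096 data_check < dd.dump1
    [[ "$data_check" == "$data" ]] || echo "FAIL: offset round-trip mismatch" >&2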
00:12:07.363 [2024-11-04 14:38:16.394995] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59218 ] 00:12:07.363 { 00:12:07.363 "subsystems": [ 00:12:07.363 { 00:12:07.363 "subsystem": "bdev", 00:12:07.363 "config": [ 00:12:07.363 { 00:12:07.363 "params": { 00:12:07.363 "trtype": "pcie", 00:12:07.363 "traddr": "0000:00:10.0", 00:12:07.363 "name": "Nvme0" 00:12:07.363 }, 00:12:07.363 "method": "bdev_nvme_attach_controller" 00:12:07.363 }, 00:12:07.363 { 00:12:07.363 "method": "bdev_wait_for_examine" 00:12:07.363 } 00:12:07.363 ] 00:12:07.363 } 00:12:07.363 ] 00:12:07.363 } 00:12:07.621 [2024-11-04 14:38:16.529486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.621 [2024-11-04 14:38:16.560062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.621 [2024-11-04 14:38:16.587923] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:07.621  [2024-11-04T14:38:17.019Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:12:07.880 00:12:07.880 14:38:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:12:07.880 14:38:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 78mbdutwao9ixxnx81ne67bic7zam0tyo57s89rlhukwbzkagdbtcsc1kkw3lrpi64xmh1wgbma2rm7i694hnbtpk9lubvdmct207aleszljo162a74bxrismcc2j9cmozhxw1ksfnozmr5kolsfbnbk26uij1tjzstvav6lgcxeau80jntx98j8xwnsbqvaf9eev0rht825h4jo89klgx2srj9rwwod7aaiziwetxolipw42mj96tr2tizy084bq0a20kk2b7m7jq966ynnqmcqjf5b29ojb4vpx36f3dz8hqi7et8m3apf9cv8gf91elyk3jbwn2ll602pllcxs88n179z5av74bz3kdtx005jvs7hlv4v1jf9nd8uow99n7zqdggxcixahxio0kjuht995ynro67peqmtj3a1x08qdjnd2f3vrgejbbejxojwvu86duencgs9sjanwrufr4fx01wz7bmggu2pgn4d9iwyakenocciwamho695a08os5jvlydixww2q3zynefa7lr041jiqinyudebz4muephxvyke3s4fsxta528qy0p21hyzoefgpg9xyxj80z1s2vvuqq60jom26vbgjsdfxfypj7lmj86494l2l0i01clyaxgvh028oqjkf5i3upqjduyajgrup2gfg6evh59asu1gbf8n74z09wqe3hs2opsu5vjnowv4edkx25tmqjjrpv5ilwk1szi1d5h5g5wlf262rh9wojbk0yjet5wosf1gealp0a2mmz2kxigpoi266llulv7z2bsdgvjah3a5yv476b4hkl9nisv2jdxyrk1hy47fzw3qedqcv5ulwzqv6n1mjt41fvbt3wfe1bxsv09msh9yqay4aqipyf7n4kyhwvo364kkla1x6ogi25rnc4hieitvtcb1rtmqcq766dv03z8lzpieo7gbqpnb0uuqlaemn56d5zctcydu09mzfrt1wjzbcwm8aqojh5313r0s9tqo7k46cx6rtxpj11vh3ud9ab522fbm5x1n90w6u63m3a3c885lze8ozabhpfd0ugk5hjwdqxv49vcs7jyeaxn202lcj229evx1fz9wp0tjhol3fu59kiq2cox10831ef2nfm9t04gdlysegkg4152vqxp2e69z9ozny5culhfav0r9kloi2ztq0a3zviyer767pn1ikqjh2rovhs4dim089u29eh824xkvj0s5gvloqklxgym68qbtj01gnygk37yf3gosi0bbthqy2zcugq9god9ke3od68wxyy5g3wi6fsg7hh00y9symyhno4ikvun8c2qy2v6xo9as6luyoryan4xe3vrqvm8bhj1yeyfuz66k9c7ch3nn5z5v0kvm8nkxe5066erc9tymcorglb5rlrsvh02ifq3fpyhlbuawbb2x4e4dv90xud37dyk19bkfxjvhlrg3bez5r1j5jh7x71lmp76pg6y6ajx3emw3ww0s6gwedn1np28pepj2zmzvxng0ix8rtb7vnf7ufik9udut2jpsszkqgsz4a2ty3azo21kdqyhwyao3yhp4qz83ic8g8uh7q127tmmsdxigxltacknxzevkisosr6g1n9bei1qo2u27jswx9k0zjsvfjd5068v6cr4cluivizuerknbn14d5eglhp0d4jagda9a4ljnilu6av54cvo2xl4lq1ykvuwqjq5hb59b3a1ol8hzdgaieg23harh7hweylw0ub3e6wzo67mcagqmua5e6wfd98efglyg5bg1r3u4z0v1tppt6757bc12eut9l5ow91tfdvh9wer4ti4w037zubwsh0aajqnfk79okz1cqhp3d158bdtaim3w1m49hf7a6v4oioolkxzezu6y7wy63lhe259kxr57td91ognyny7z17r17jqx9s4po44ksn6uniwkguwdiezh39o3exlv9yojawxb4zvcott5v4lg09yinvpyefjpynjpqj9370ecueyuxrs435ipmi0z1hoet51ofnhv7l5jai2fhpvnga8vonfwqbwnn1e0z84ldxww61eslc219zxy1u144itljq61t6n6kd4pqjnue76aih2aq2j1vbuv5
ra9o5h1wp7lbqrlg0cw2e9d6npqydzbk9rfbwjhbu8zxvfk2wbqavhd5txyms0urkt9krn4cro017xrbl5s7i9w91qcrarj4jlf7sy9iwukgd7m55qcayhqb9yswdrtm9e2twlsvnxnop8rp1dzoan4wx2teqbmyheuojfq7hcdrvf3a9wqqge640fge0ritxoyupl5r3tiilj8g2ttrx42pe4utbakf8wdvr51qvrgi9rc296m01xbfmwu7kj70ce7bkhrmf30q9v7np6l7a142urqmz62tkgkn9u70t9q2z32t5rjpqp7tntlel0ac5ijjrnorjldri8ywfqec1fb8wgseihi9283naqxl9y9kqftobl9ojype7bqd1w7nvog85gmnq3hcf6311x724czni5wk3epzb5q50r1ffbjdgvr6odd4rx66pw8pnflh5zjwp75gq8ksvsmi7eydy0t5nynpr6aa16nyivy15vwc5gw1s6pq31lmn0pugripf37vxzfahb9jinor9z3oaxn1178gg9hhtotkqrdb0tr4b3viy0oky50b4ojmt4ifijg9ic47iut5vh71rimt6ghoon1h30azaw9tk15o7obg075f5zmorfsvgo9pas0hxndsve6bm7xuiqxivcnp1e3tuvs7eib93zn3thcaxccy03xotumjg3mm53l1naa3cf0lnm1z7u7x11uz0vup3gczl8rwjwoqhi27l4zlglw0cwg4e0pgfr4blz2wmqq8j50fkme721y7eebbznm7afy1y16jv8mw3h35mzqhl32jedmwqzzay863ibw62x40nf8d5awx2w7me9p7u4u5nn8i04etou3p3wbhtwlo3xf8x9bq9ymsy0csx7s5y75ni0kd372dqf4ac2frkmrj6hmh8qroplqslvuvuri98o4i2hi2e7ojpd91m62pldwws1np5nh3d5ip9gwk6j9juoenm2t7fny2vddn35o9fsei3mk6r11oes7j72sg4vd91ppss1ct7xun2mrg4g69j2g0dwtdlg0xdv7ryiftk78ay950fuf7lv4m4z6uixsmywlx3dtrz0v7t7eht25z7ya1cttzibte93k7efmbdz4pok1jhwp5gkq4314j20p4kusee4vnt7jqcc2c4to4p8g1dctb2lhuutlcknn9dxhs4atb5sze18s48zs7r2j46ln30svrrf73widoqsj4w24k366ot91dcwvd4fhtmhcrbvhhwwaigmobb9dbeu0t5urkp020g066u64lp38he7f7lt2rgj4l6ml5399piwhce15bmt093tdxkseyxxbj5kmvgdbfmzxosub8pa8pcnzgwak6ohfblbg5qoqlgcxwf3trhj3xihcznbv1bvy2x0ly2be5zzx6gbkzbt5tl3zohysxh2l3rr6s24syp450gwrlun0h9x1l0isctwu0vh7ccdzob2q9ncojftmetnt3dpywbvnd009rqet6c3v3tjqfe9auj5fh2lphtktg61xibl06nmuq9wrxbz0ad5sltft8dli3mtbhf8acojupnpgz0c3whdrb310uecren4h2e8fg8wk8napojxacgjjeatorek8pfbjqtz48amwbo53v63lzf4vectstpil74su5wgb3viut0rowf68sea1uul0mhaor2xsmlj9wwilveu5gh7f58rn1p1snosjebxs7z7r5tk0f63t8w57968333t8m17ufxi8ens0chvdf2xqctdmodqd3gf90h27pkvtd16255rzaa01xcdvvvabw4q2kian86majpfs4evcc3krtjru2wayf0jy7j6epq3ubygscbakzbk2tqgt8ovpo18fwnejb0o0jupb0d9zjov4zbkrtzsbbrbfqd3solfbndtfizdymsfs3b40jxiqogsmpqs0qp3ijz9ousq9wukc == 
\7\8\m\b\d\u\t\w\a\o\9\i\x\x\n\x\8\1\n\e\6\7\b\i\c\7\z\a\m\0\t\y\o\5\7\s\8\9\r\l\h\u\k\w\b\z\k\a\g\d\b\t\c\s\c\1\k\k\w\3\l\r\p\i\6\4\x\m\h\1\w\g\b\m\a\2\r\m\7\i\6\9\4\h\n\b\t\p\k\9\l\u\b\v\d\m\c\t\2\0\7\a\l\e\s\z\l\j\o\1\6\2\a\7\4\b\x\r\i\s\m\c\c\2\j\9\c\m\o\z\h\x\w\1\k\s\f\n\o\z\m\r\5\k\o\l\s\f\b\n\b\k\2\6\u\i\j\1\t\j\z\s\t\v\a\v\6\l\g\c\x\e\a\u\8\0\j\n\t\x\9\8\j\8\x\w\n\s\b\q\v\a\f\9\e\e\v\0\r\h\t\8\2\5\h\4\j\o\8\9\k\l\g\x\2\s\r\j\9\r\w\w\o\d\7\a\a\i\z\i\w\e\t\x\o\l\i\p\w\4\2\m\j\9\6\t\r\2\t\i\z\y\0\8\4\b\q\0\a\2\0\k\k\2\b\7\m\7\j\q\9\6\6\y\n\n\q\m\c\q\j\f\5\b\2\9\o\j\b\4\v\p\x\3\6\f\3\d\z\8\h\q\i\7\e\t\8\m\3\a\p\f\9\c\v\8\g\f\9\1\e\l\y\k\3\j\b\w\n\2\l\l\6\0\2\p\l\l\c\x\s\8\8\n\1\7\9\z\5\a\v\7\4\b\z\3\k\d\t\x\0\0\5\j\v\s\7\h\l\v\4\v\1\j\f\9\n\d\8\u\o\w\9\9\n\7\z\q\d\g\g\x\c\i\x\a\h\x\i\o\0\k\j\u\h\t\9\9\5\y\n\r\o\6\7\p\e\q\m\t\j\3\a\1\x\0\8\q\d\j\n\d\2\f\3\v\r\g\e\j\b\b\e\j\x\o\j\w\v\u\8\6\d\u\e\n\c\g\s\9\s\j\a\n\w\r\u\f\r\4\f\x\0\1\w\z\7\b\m\g\g\u\2\p\g\n\4\d\9\i\w\y\a\k\e\n\o\c\c\i\w\a\m\h\o\6\9\5\a\0\8\o\s\5\j\v\l\y\d\i\x\w\w\2\q\3\z\y\n\e\f\a\7\l\r\0\4\1\j\i\q\i\n\y\u\d\e\b\z\4\m\u\e\p\h\x\v\y\k\e\3\s\4\f\s\x\t\a\5\2\8\q\y\0\p\2\1\h\y\z\o\e\f\g\p\g\9\x\y\x\j\8\0\z\1\s\2\v\v\u\q\q\6\0\j\o\m\2\6\v\b\g\j\s\d\f\x\f\y\p\j\7\l\m\j\8\6\4\9\4\l\2\l\0\i\0\1\c\l\y\a\x\g\v\h\0\2\8\o\q\j\k\f\5\i\3\u\p\q\j\d\u\y\a\j\g\r\u\p\2\g\f\g\6\e\v\h\5\9\a\s\u\1\g\b\f\8\n\7\4\z\0\9\w\q\e\3\h\s\2\o\p\s\u\5\v\j\n\o\w\v\4\e\d\k\x\2\5\t\m\q\j\j\r\p\v\5\i\l\w\k\1\s\z\i\1\d\5\h\5\g\5\w\l\f\2\6\2\r\h\9\w\o\j\b\k\0\y\j\e\t\5\w\o\s\f\1\g\e\a\l\p\0\a\2\m\m\z\2\k\x\i\g\p\o\i\2\6\6\l\l\u\l\v\7\z\2\b\s\d\g\v\j\a\h\3\a\5\y\v\4\7\6\b\4\h\k\l\9\n\i\s\v\2\j\d\x\y\r\k\1\h\y\4\7\f\z\w\3\q\e\d\q\c\v\5\u\l\w\z\q\v\6\n\1\m\j\t\4\1\f\v\b\t\3\w\f\e\1\b\x\s\v\0\9\m\s\h\9\y\q\a\y\4\a\q\i\p\y\f\7\n\4\k\y\h\w\v\o\3\6\4\k\k\l\a\1\x\6\o\g\i\2\5\r\n\c\4\h\i\e\i\t\v\t\c\b\1\r\t\m\q\c\q\7\6\6\d\v\0\3\z\8\l\z\p\i\e\o\7\g\b\q\p\n\b\0\u\u\q\l\a\e\m\n\5\6\d\5\z\c\t\c\y\d\u\0\9\m\z\f\r\t\1\w\j\z\b\c\w\m\8\a\q\o\j\h\5\3\1\3\r\0\s\9\t\q\o\7\k\4\6\c\x\6\r\t\x\p\j\1\1\v\h\3\u\d\9\a\b\5\2\2\f\b\m\5\x\1\n\9\0\w\6\u\6\3\m\3\a\3\c\8\8\5\l\z\e\8\o\z\a\b\h\p\f\d\0\u\g\k\5\h\j\w\d\q\x\v\4\9\v\c\s\7\j\y\e\a\x\n\2\0\2\l\c\j\2\2\9\e\v\x\1\f\z\9\w\p\0\t\j\h\o\l\3\f\u\5\9\k\i\q\2\c\o\x\1\0\8\3\1\e\f\2\n\f\m\9\t\0\4\g\d\l\y\s\e\g\k\g\4\1\5\2\v\q\x\p\2\e\6\9\z\9\o\z\n\y\5\c\u\l\h\f\a\v\0\r\9\k\l\o\i\2\z\t\q\0\a\3\z\v\i\y\e\r\7\6\7\p\n\1\i\k\q\j\h\2\r\o\v\h\s\4\d\i\m\0\8\9\u\2\9\e\h\8\2\4\x\k\v\j\0\s\5\g\v\l\o\q\k\l\x\g\y\m\6\8\q\b\t\j\0\1\g\n\y\g\k\3\7\y\f\3\g\o\s\i\0\b\b\t\h\q\y\2\z\c\u\g\q\9\g\o\d\9\k\e\3\o\d\6\8\w\x\y\y\5\g\3\w\i\6\f\s\g\7\h\h\0\0\y\9\s\y\m\y\h\n\o\4\i\k\v\u\n\8\c\2\q\y\2\v\6\x\o\9\a\s\6\l\u\y\o\r\y\a\n\4\x\e\3\v\r\q\v\m\8\b\h\j\1\y\e\y\f\u\z\6\6\k\9\c\7\c\h\3\n\n\5\z\5\v\0\k\v\m\8\n\k\x\e\5\0\6\6\e\r\c\9\t\y\m\c\o\r\g\l\b\5\r\l\r\s\v\h\0\2\i\f\q\3\f\p\y\h\l\b\u\a\w\b\b\2\x\4\e\4\d\v\9\0\x\u\d\3\7\d\y\k\1\9\b\k\f\x\j\v\h\l\r\g\3\b\e\z\5\r\1\j\5\j\h\7\x\7\1\l\m\p\7\6\p\g\6\y\6\a\j\x\3\e\m\w\3\w\w\0\s\6\g\w\e\d\n\1\n\p\2\8\p\e\p\j\2\z\m\z\v\x\n\g\0\i\x\8\r\t\b\7\v\n\f\7\u\f\i\k\9\u\d\u\t\2\j\p\s\s\z\k\q\g\s\z\4\a\2\t\y\3\a\z\o\2\1\k\d\q\y\h\w\y\a\o\3\y\h\p\4\q\z\8\3\i\c\8\g\8\u\h\7\q\1\2\7\t\m\m\s\d\x\i\g\x\l\t\a\c\k\n\x\z\e\v\k\i\s\o\s\r\6\g\1\n\9\b\e\i\1\q\o\2\u\2\7\j\s\w\x\9\k\0\z\j\s\v\f\j\d\5\0\6\8\v\6\c\r\4\c\l\u\i\v\i\z\u\e\r\k\n\b\n\1\4\d\5\e\g\l\h\p\0\d\4\j\a\g\d\a\9\a\4\l\j\n\i\l\u\6\a\v\5\4\c\v\o\2\x\l\4\l\q\1\y\k\v\u\w\q\j\q\5\h\b\5\9\b\3\a\1\o\l\8\h\z\d\g\a\i\e\g\2\3\h\a\r\h\7\h\w\e\y\l\w\0\u\b\3\e\6\w\z\o\6\7\m\c\a\g\q\m\u\a\5\e\6\
w\f\d\9\8\e\f\g\l\y\g\5\b\g\1\r\3\u\4\z\0\v\1\t\p\p\t\6\7\5\7\b\c\1\2\e\u\t\9\l\5\o\w\9\1\t\f\d\v\h\9\w\e\r\4\t\i\4\w\0\3\7\z\u\b\w\s\h\0\a\a\j\q\n\f\k\7\9\o\k\z\1\c\q\h\p\3\d\1\5\8\b\d\t\a\i\m\3\w\1\m\4\9\h\f\7\a\6\v\4\o\i\o\o\l\k\x\z\e\z\u\6\y\7\w\y\6\3\l\h\e\2\5\9\k\x\r\5\7\t\d\9\1\o\g\n\y\n\y\7\z\1\7\r\1\7\j\q\x\9\s\4\p\o\4\4\k\s\n\6\u\n\i\w\k\g\u\w\d\i\e\z\h\3\9\o\3\e\x\l\v\9\y\o\j\a\w\x\b\4\z\v\c\o\t\t\5\v\4\l\g\0\9\y\i\n\v\p\y\e\f\j\p\y\n\j\p\q\j\9\3\7\0\e\c\u\e\y\u\x\r\s\4\3\5\i\p\m\i\0\z\1\h\o\e\t\5\1\o\f\n\h\v\7\l\5\j\a\i\2\f\h\p\v\n\g\a\8\v\o\n\f\w\q\b\w\n\n\1\e\0\z\8\4\l\d\x\w\w\6\1\e\s\l\c\2\1\9\z\x\y\1\u\1\4\4\i\t\l\j\q\6\1\t\6\n\6\k\d\4\p\q\j\n\u\e\7\6\a\i\h\2\a\q\2\j\1\v\b\u\v\5\r\a\9\o\5\h\1\w\p\7\l\b\q\r\l\g\0\c\w\2\e\9\d\6\n\p\q\y\d\z\b\k\9\r\f\b\w\j\h\b\u\8\z\x\v\f\k\2\w\b\q\a\v\h\d\5\t\x\y\m\s\0\u\r\k\t\9\k\r\n\4\c\r\o\0\1\7\x\r\b\l\5\s\7\i\9\w\9\1\q\c\r\a\r\j\4\j\l\f\7\s\y\9\i\w\u\k\g\d\7\m\5\5\q\c\a\y\h\q\b\9\y\s\w\d\r\t\m\9\e\2\t\w\l\s\v\n\x\n\o\p\8\r\p\1\d\z\o\a\n\4\w\x\2\t\e\q\b\m\y\h\e\u\o\j\f\q\7\h\c\d\r\v\f\3\a\9\w\q\q\g\e\6\4\0\f\g\e\0\r\i\t\x\o\y\u\p\l\5\r\3\t\i\i\l\j\8\g\2\t\t\r\x\4\2\p\e\4\u\t\b\a\k\f\8\w\d\v\r\5\1\q\v\r\g\i\9\r\c\2\9\6\m\0\1\x\b\f\m\w\u\7\k\j\7\0\c\e\7\b\k\h\r\m\f\3\0\q\9\v\7\n\p\6\l\7\a\1\4\2\u\r\q\m\z\6\2\t\k\g\k\n\9\u\7\0\t\9\q\2\z\3\2\t\5\r\j\p\q\p\7\t\n\t\l\e\l\0\a\c\5\i\j\j\r\n\o\r\j\l\d\r\i\8\y\w\f\q\e\c\1\f\b\8\w\g\s\e\i\h\i\9\2\8\3\n\a\q\x\l\9\y\9\k\q\f\t\o\b\l\9\o\j\y\p\e\7\b\q\d\1\w\7\n\v\o\g\8\5\g\m\n\q\3\h\c\f\6\3\1\1\x\7\2\4\c\z\n\i\5\w\k\3\e\p\z\b\5\q\5\0\r\1\f\f\b\j\d\g\v\r\6\o\d\d\4\r\x\6\6\p\w\8\p\n\f\l\h\5\z\j\w\p\7\5\g\q\8\k\s\v\s\m\i\7\e\y\d\y\0\t\5\n\y\n\p\r\6\a\a\1\6\n\y\i\v\y\1\5\v\w\c\5\g\w\1\s\6\p\q\3\1\l\m\n\0\p\u\g\r\i\p\f\3\7\v\x\z\f\a\h\b\9\j\i\n\o\r\9\z\3\o\a\x\n\1\1\7\8\g\g\9\h\h\t\o\t\k\q\r\d\b\0\t\r\4\b\3\v\i\y\0\o\k\y\5\0\b\4\o\j\m\t\4\i\f\i\j\g\9\i\c\4\7\i\u\t\5\v\h\7\1\r\i\m\t\6\g\h\o\o\n\1\h\3\0\a\z\a\w\9\t\k\1\5\o\7\o\b\g\0\7\5\f\5\z\m\o\r\f\s\v\g\o\9\p\a\s\0\h\x\n\d\s\v\e\6\b\m\7\x\u\i\q\x\i\v\c\n\p\1\e\3\t\u\v\s\7\e\i\b\9\3\z\n\3\t\h\c\a\x\c\c\y\0\3\x\o\t\u\m\j\g\3\m\m\5\3\l\1\n\a\a\3\c\f\0\l\n\m\1\z\7\u\7\x\1\1\u\z\0\v\u\p\3\g\c\z\l\8\r\w\j\w\o\q\h\i\2\7\l\4\z\l\g\l\w\0\c\w\g\4\e\0\p\g\f\r\4\b\l\z\2\w\m\q\q\8\j\5\0\f\k\m\e\7\2\1\y\7\e\e\b\b\z\n\m\7\a\f\y\1\y\1\6\j\v\8\m\w\3\h\3\5\m\z\q\h\l\3\2\j\e\d\m\w\q\z\z\a\y\8\6\3\i\b\w\6\2\x\4\0\n\f\8\d\5\a\w\x\2\w\7\m\e\9\p\7\u\4\u\5\n\n\8\i\0\4\e\t\o\u\3\p\3\w\b\h\t\w\l\o\3\x\f\8\x\9\b\q\9\y\m\s\y\0\c\s\x\7\s\5\y\7\5\n\i\0\k\d\3\7\2\d\q\f\4\a\c\2\f\r\k\m\r\j\6\h\m\h\8\q\r\o\p\l\q\s\l\v\u\v\u\r\i\9\8\o\4\i\2\h\i\2\e\7\o\j\p\d\9\1\m\6\2\p\l\d\w\w\s\1\n\p\5\n\h\3\d\5\i\p\9\g\w\k\6\j\9\j\u\o\e\n\m\2\t\7\f\n\y\2\v\d\d\n\3\5\o\9\f\s\e\i\3\m\k\6\r\1\1\o\e\s\7\j\7\2\s\g\4\v\d\9\1\p\p\s\s\1\c\t\7\x\u\n\2\m\r\g\4\g\6\9\j\2\g\0\d\w\t\d\l\g\0\x\d\v\7\r\y\i\f\t\k\7\8\a\y\9\5\0\f\u\f\7\l\v\4\m\4\z\6\u\i\x\s\m\y\w\l\x\3\d\t\r\z\0\v\7\t\7\e\h\t\2\5\z\7\y\a\1\c\t\t\z\i\b\t\e\9\3\k\7\e\f\m\b\d\z\4\p\o\k\1\j\h\w\p\5\g\k\q\4\3\1\4\j\2\0\p\4\k\u\s\e\e\4\v\n\t\7\j\q\c\c\2\c\4\t\o\4\p\8\g\1\d\c\t\b\2\l\h\u\u\t\l\c\k\n\n\9\d\x\h\s\4\a\t\b\5\s\z\e\1\8\s\4\8\z\s\7\r\2\j\4\6\l\n\3\0\s\v\r\r\f\7\3\w\i\d\o\q\s\j\4\w\2\4\k\3\6\6\o\t\9\1\d\c\w\v\d\4\f\h\t\m\h\c\r\b\v\h\h\w\w\a\i\g\m\o\b\b\9\d\b\e\u\0\t\5\u\r\k\p\0\2\0\g\0\6\6\u\6\4\l\p\3\8\h\e\7\f\7\l\t\2\r\g\j\4\l\6\m\l\5\3\9\9\p\i\w\h\c\e\1\5\b\m\t\0\9\3\t\d\x\k\s\e\y\x\x\b\j\5\k\m\v\g\d\b\f\m\z\x\o\s\u\b\8\p\a\8\p\c\n\z\g\w\a\k\6\o\h\f\b\l\b\g\5\q\o\q\l\g\c\x\w\f\3\t\r\h\j\3\x\i\h\c\z\n\b\v\1\b\v\y\2\x\0\l\y\2\b\e\5\z\z\x\6\g\b\k\z\b\t
\5\t\l\3\z\o\h\y\s\x\h\2\l\3\r\r\6\s\2\4\s\y\p\4\5\0\g\w\r\l\u\n\0\h\9\x\1\l\0\i\s\c\t\w\u\0\v\h\7\c\c\d\z\o\b\2\q\9\n\c\o\j\f\t\m\e\t\n\t\3\d\p\y\w\b\v\n\d\0\0\9\r\q\e\t\6\c\3\v\3\t\j\q\f\e\9\a\u\j\5\f\h\2\l\p\h\t\k\t\g\6\1\x\i\b\l\0\6\n\m\u\q\9\w\r\x\b\z\0\a\d\5\s\l\t\f\t\8\d\l\i\3\m\t\b\h\f\8\a\c\o\j\u\p\n\p\g\z\0\c\3\w\h\d\r\b\3\1\0\u\e\c\r\e\n\4\h\2\e\8\f\g\8\w\k\8\n\a\p\o\j\x\a\c\g\j\j\e\a\t\o\r\e\k\8\p\f\b\j\q\t\z\4\8\a\m\w\b\o\5\3\v\6\3\l\z\f\4\v\e\c\t\s\t\p\i\l\7\4\s\u\5\w\g\b\3\v\i\u\t\0\r\o\w\f\6\8\s\e\a\1\u\u\l\0\m\h\a\o\r\2\x\s\m\l\j\9\w\w\i\l\v\e\u\5\g\h\7\f\5\8\r\n\1\p\1\s\n\o\s\j\e\b\x\s\7\z\7\r\5\t\k\0\f\6\3\t\8\w\5\7\9\6\8\3\3\3\t\8\m\1\7\u\f\x\i\8\e\n\s\0\c\h\v\d\f\2\x\q\c\t\d\m\o\d\q\d\3\g\f\9\0\h\2\7\p\k\v\t\d\1\6\2\5\5\r\z\a\a\0\1\x\c\d\v\v\v\a\b\w\4\q\2\k\i\a\n\8\6\m\a\j\p\f\s\4\e\v\c\c\3\k\r\t\j\r\u\2\w\a\y\f\0\j\y\7\j\6\e\p\q\3\u\b\y\g\s\c\b\a\k\z\b\k\2\t\q\g\t\8\o\v\p\o\1\8\f\w\n\e\j\b\0\o\0\j\u\p\b\0\d\9\z\j\o\v\4\z\b\k\r\t\z\s\b\b\r\b\f\q\d\3\s\o\l\f\b\n\d\t\f\i\z\d\y\m\s\f\s\3\b\4\0\j\x\i\q\o\g\s\m\p\q\s\0\q\p\3\i\j\z\9\o\u\s\q\9\w\u\k\c ]] 00:12:07.880 00:12:07.880 real 0m0.879s 00:12:07.880 user 0m0.564s 00:12:07.880 sys 0m0.332s 00:12:07.880 14:38:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:07.880 14:38:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:12:07.880 ************************************ 00:12:07.880 END TEST dd_rw_offset 00:12:07.880 ************************************ 00:12:07.880 14:38:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:12:07.880 14:38:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:12:07.880 14:38:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:12:07.880 14:38:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:12:07.880 14:38:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:12:07.880 14:38:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:12:07.880 14:38:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:12:07.880 14:38:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:12:07.880 14:38:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:12:07.880 14:38:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:12:07.880 14:38:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:12:07.880 [2024-11-04 14:38:16.841256] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
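The dd_rw_offset check that finishes here boils down to: write a known random payload, read the same number of bytes back with read -rn4096 (as in the trace), and compare the two strings literally. A minimal standalone sketch of that round trip, using plain files and GNU dd in place of the Nvme0n1 bdev and spdk_dd binary driven above (file names are illustrative):

#!/usr/bin/env bash
# generate a 4096-character payload and write it out
payload=$(head -c 4096 /dev/urandom | base64 | tr -d '\n' | head -c 4096)
printf '%s' "$payload" > dd.dump0

# copy the payload, then pull exactly 4096 characters back out
dd if=dd.dump0 of=dd.dump1 bs=4096 count=1 status=none
IFS= read -rn4096 data_check < dd.dump1

# quoting the right-hand side forces a literal, not pattern, comparison
[[ $data_check == "$payload" ]] && echo "read-back matches" || echo "MISMATCH"
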
00:12:07.880 [2024-11-04 14:38:16.841317] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59248 ] 00:12:07.880 { 00:12:07.880 "subsystems": [ 00:12:07.880 { 00:12:07.880 "subsystem": "bdev", 00:12:07.880 "config": [ 00:12:07.880 { 00:12:07.880 "params": { 00:12:07.880 "trtype": "pcie", 00:12:07.881 "traddr": "0000:00:10.0", 00:12:07.881 "name": "Nvme0" 00:12:07.881 }, 00:12:07.881 "method": "bdev_nvme_attach_controller" 00:12:07.881 }, 00:12:07.881 { 00:12:07.881 "method": "bdev_wait_for_examine" 00:12:07.881 } 00:12:07.881 ] 00:12:07.881 } 00:12:07.881 ] 00:12:07.881 } 00:12:07.881 [2024-11-04 14:38:16.979473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.881 [2024-11-04 14:38:17.016292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.172 [2024-11-04 14:38:17.048343] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:08.172  [2024-11-04T14:38:17.312Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:12:08.172 00:12:08.172 14:38:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:08.172 00:12:08.172 real 0m12.858s 00:12:08.172 user 0m8.825s 00:12:08.172 sys 0m4.193s 00:12:08.172 14:38:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:08.172 14:38:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:12:08.172 ************************************ 00:12:08.172 END TEST spdk_dd_basic_rw 00:12:08.172 ************************************ 00:12:08.429 14:38:17 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:12:08.429 14:38:17 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:08.429 14:38:17 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:08.429 14:38:17 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:12:08.429 ************************************ 00:12:08.429 START TEST spdk_dd_posix 00:12:08.429 ************************************ 00:12:08.429 14:38:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:12:08.429 * Looking for test storage... 
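The clear_nvme cleanup traced just above shows how these tests hand spdk_dd its bdev configuration: a small JSON document naming the PCIe controller is fed in over a file descriptor (--json /dev/fd/62 in the trace) while /dev/zero is copied onto the Nvme0n1 output bdev. A sketch of that invocation, with the JSON taken from the log and the binary path and PCI address assumed to match this particular VM:

#!/usr/bin/env bash
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

# minimal bdev subsystem config, as printed in the trace above
gen_conf() {
    cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
}

# zero the first 1 MiB of the Nvme0n1 bdev, reading the config from a pipe
"$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(gen_conf)
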
00:12:08.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:12:08.429 14:38:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:08.429 14:38:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # lcov --version 00:12:08.429 14:38:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:08.429 14:38:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:08.429 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:08.429 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:08.429 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:08.429 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.429 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:12:08.429 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:12:08.429 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:12:08.429 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:12:08.429 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:12:08.429 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:08.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.430 --rc genhtml_branch_coverage=1 00:12:08.430 --rc genhtml_function_coverage=1 00:12:08.430 --rc genhtml_legend=1 00:12:08.430 --rc geninfo_all_blocks=1 00:12:08.430 --rc geninfo_unexecuted_blocks=1 00:12:08.430 00:12:08.430 ' 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:08.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.430 --rc genhtml_branch_coverage=1 00:12:08.430 --rc genhtml_function_coverage=1 00:12:08.430 --rc genhtml_legend=1 00:12:08.430 --rc geninfo_all_blocks=1 00:12:08.430 --rc geninfo_unexecuted_blocks=1 00:12:08.430 00:12:08.430 ' 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:08.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.430 --rc genhtml_branch_coverage=1 00:12:08.430 --rc genhtml_function_coverage=1 00:12:08.430 --rc genhtml_legend=1 00:12:08.430 --rc geninfo_all_blocks=1 00:12:08.430 --rc geninfo_unexecuted_blocks=1 00:12:08.430 00:12:08.430 ' 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:08.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.430 --rc genhtml_branch_coverage=1 00:12:08.430 --rc genhtml_function_coverage=1 00:12:08.430 --rc genhtml_legend=1 00:12:08.430 --rc geninfo_all_blocks=1 00:12:08.430 --rc geninfo_unexecuted_blocks=1 00:12:08.430 00:12:08.430 ' 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:12:08.430 * First test run, liburing in use 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:12:08.430 ************************************ 00:12:08.430 START TEST dd_flag_append 00:12:08.430 ************************************ 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1127 -- # append 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=r4e805yl89ei7aokicpejrdql6jq8f1m 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=uosxfjf0te22hbt8sjmafdjf536swvnn 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s r4e805yl89ei7aokicpejrdql6jq8f1m 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s uosxfjf0te22hbt8sjmafdjf536swvnn 00:12:08.430 14:38:17 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:12:08.430 [2024-11-04 14:38:17.476904] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
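The dd_flag_append run launched above exercises --oflag=append: both dump files are seeded with 32 random characters, file0 is then appended onto file1, and file1 must end up holding dump1 immediately followed by dump0 (the comparison appears a little further down). A sketch of that sequence; gen_bytes here is a simple stand-in for the test helper of the same name:

#!/usr/bin/env bash
gen_bytes() { LC_ALL=C tr -dc 'a-z0-9' < /dev/urandom | head -c "$1"; }

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dump0=$(gen_bytes 32)
dump1=$(gen_bytes 32)

printf '%s' "$dump0" > dd.dump0
printf '%s' "$dump1" > dd.dump1

"$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --oflag=append

# append means dump1's original content is kept and dump0 lands after it
[[ $(< dd.dump1) == "${dump1}${dump0}" ]] && echo "append flag OK"
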
00:12:08.430 [2024-11-04 14:38:17.476972] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59314 ] 00:12:08.687 [2024-11-04 14:38:17.617272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.687 [2024-11-04 14:38:17.653373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.687 [2024-11-04 14:38:17.684245] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:08.687  [2024-11-04T14:38:17.827Z] Copying: 32/32 [B] (average 31 kBps) 00:12:08.687 00:12:08.687 14:38:17 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ uosxfjf0te22hbt8sjmafdjf536swvnnr4e805yl89ei7aokicpejrdql6jq8f1m == \u\o\s\x\f\j\f\0\t\e\2\2\h\b\t\8\s\j\m\a\f\d\j\f\5\3\6\s\w\v\n\n\r\4\e\8\0\5\y\l\8\9\e\i\7\a\o\k\i\c\p\e\j\r\d\q\l\6\j\q\8\f\1\m ]] 00:12:08.687 00:12:08.687 real 0m0.374s 00:12:08.687 user 0m0.191s 00:12:08.687 sys 0m0.151s 00:12:08.687 14:38:17 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:08.687 14:38:17 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:12:08.687 ************************************ 00:12:08.687 END TEST dd_flag_append 00:12:08.687 ************************************ 00:12:08.945 14:38:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:12:08.945 14:38:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:08.945 14:38:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:08.945 14:38:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:12:08.945 ************************************ 00:12:08.945 START TEST dd_flag_directory 00:12:08.945 ************************************ 00:12:08.945 14:38:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1127 -- # directory 00:12:08.945 14:38:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:08.945 14:38:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:12:08.945 14:38:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:08.945 14:38:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.945 14:38:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:08.945 14:38:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.945 14:38:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:08.945 14:38:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.945 14:38:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:08.945 14:38:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.945 14:38:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:08.945 14:38:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:08.945 [2024-11-04 14:38:17.888969] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:12:08.945 [2024-11-04 14:38:17.889043] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59343 ] 00:12:08.945 [2024-11-04 14:38:18.023516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.945 [2024-11-04 14:38:18.062066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.202 [2024-11-04 14:38:18.093092] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:09.202 [2024-11-04 14:38:18.117092] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:12:09.202 [2024-11-04 14:38:18.117136] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:12:09.202 [2024-11-04 14:38:18.117146] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:09.202 [2024-11-04 14:38:18.175376] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:09.202 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:12:09.202 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:09.202 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:12:09.202 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:12:09.202 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:12:09.202 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:09.202 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:12:09.202 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:12:09.202 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:12:09.202 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:09.202 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:09.202 14:38:18 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:09.202 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:09.202 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:09.202 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:09.202 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:09.202 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:09.202 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:12:09.202 [2024-11-04 14:38:18.256915] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:12:09.202 [2024-11-04 14:38:18.256982] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59352 ] 00:12:09.460 [2024-11-04 14:38:18.396148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.460 [2024-11-04 14:38:18.432457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.460 [2024-11-04 14:38:18.463337] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:09.460 [2024-11-04 14:38:18.486098] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:12:09.460 [2024-11-04 14:38:18.486137] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:12:09.460 [2024-11-04 14:38:18.486147] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:09.460 [2024-11-04 14:38:18.543347] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:09.460 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:12:09.460 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:09.460 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:12:09.460 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:12:09.460 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:12:09.460 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:09.460 00:12:09.460 real 0m0.736s 00:12:09.460 user 0m0.369s 00:12:09.460 sys 0m0.161s 00:12:09.460 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:09.460 ************************************ 00:12:09.460 END TEST dd_flag_directory 00:12:09.460 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:12:09.460 ************************************ 00:12:09.718 14:38:18 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:12:09.718 14:38:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:09.718 14:38:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:09.718 14:38:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:12:09.718 ************************************ 00:12:09.718 START TEST dd_flag_nofollow 00:12:09.718 ************************************ 00:12:09.718 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1127 -- # nofollow 00:12:09.718 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:12:09.718 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:12:09.718 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:12:09.718 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:12:09.718 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:09.718 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:12:09.718 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:09.718 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:09.718 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:09.718 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:09.718 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:09.718 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:09.718 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:09.718 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:09.718 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:09.718 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:09.718 [2024-11-04 14:38:18.668492] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
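The dd_flag_directory test that ends just above is an expected-failure case: the NOT wrapper only succeeds when the wrapped command fails, and pointing --iflag=directory or --oflag=directory at a regular file must make spdk_dd exit with the "Not a directory" errors seen in the trace. A sketch of that idiom, with NOT reduced to a minimal stand-in (the real autotest helper also normalises exit codes above 128, which is where the es=236 / es=108 lines in the trace come from):

#!/usr/bin/env bash
# minimal stand-in for the autotest NOT helper: succeed only if "$@" fails
NOT() { ! "$@"; }

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
: > dd.dump0   # a regular file, deliberately not a directory

NOT "$SPDK_DD" --if=dd.dump0 --iflag=directory --of=dd.dump0 &&
    echo "directory iflag rejected as expected"
NOT "$SPDK_DD" --if=dd.dump0 --of=dd.dump0 --oflag=directory &&
    echo "directory oflag rejected as expected"
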
00:12:09.718 [2024-11-04 14:38:18.668554] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59375 ] 00:12:09.718 [2024-11-04 14:38:18.808205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.718 [2024-11-04 14:38:18.844424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.978 [2024-11-04 14:38:18.875489] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:09.978 [2024-11-04 14:38:18.898584] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:12:09.978 [2024-11-04 14:38:18.898643] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:12:09.978 [2024-11-04 14:38:18.898653] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:09.978 [2024-11-04 14:38:18.956441] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:09.978 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:12:09.978 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:09.978 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:12:09.978 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:12:09.978 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:12:09.978 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:09.978 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:12:09.978 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:12:09.978 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:12:09.978 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:09.978 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:09.978 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:09.978 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:09.978 14:38:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:09.978 14:38:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:09.978 14:38:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:09.978 14:38:19 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:09.978 14:38:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:12:09.978 [2024-11-04 14:38:19.035100] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:12:09.978 [2024-11-04 14:38:19.035161] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59385 ] 00:12:10.282 [2024-11-04 14:38:19.172123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.282 [2024-11-04 14:38:19.208283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.282 [2024-11-04 14:38:19.240358] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:10.282 [2024-11-04 14:38:19.264861] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:12:10.282 [2024-11-04 14:38:19.264900] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:12:10.282 [2024-11-04 14:38:19.264911] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:10.282 [2024-11-04 14:38:19.323477] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:10.282 14:38:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:12:10.282 14:38:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:10.282 14:38:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:12:10.282 14:38:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:12:10.282 14:38:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:12:10.282 14:38:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:10.282 14:38:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:12:10.282 14:38:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:12:10.282 14:38:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:12:10.282 14:38:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:10.282 [2024-11-04 14:38:19.407594] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
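dd_flag_nofollow, traced above, checks the symlink behaviour: with nofollow set, spdk_dd must refuse to open a path that is a symlink (the "Too many levels of symbolic links" errors), while the same copy goes through once the flag is dropped. A sketch of that sequence using the flags from the trace; NOT is the same minimal stand-in as in the directory sketch:

#!/usr/bin/env bash
NOT() { ! "$@"; }   # expected-failure wrapper, as before

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

ln -fs dd.dump0 dd.dump0.link
ln -fs dd.dump1 dd.dump1.link

# nofollow on either side must reject the symlinked path
NOT "$SPDK_DD" --if=dd.dump0.link --iflag=nofollow --of=dd.dump1
NOT "$SPDK_DD" --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow

# without nofollow the link is followed and the copy succeeds
"$SPDK_DD" --if=dd.dump0.link --of=dd.dump1 && echo "plain copy via symlink OK"
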
00:12:10.282 [2024-11-04 14:38:19.407674] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59392 ] 00:12:10.540 [2024-11-04 14:38:19.547311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.540 [2024-11-04 14:38:19.583265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.540 [2024-11-04 14:38:19.614684] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:10.540  [2024-11-04T14:38:19.938Z] Copying: 512/512 [B] (average 500 kBps) 00:12:10.798 00:12:10.798 14:38:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 5jnxgip8v48aoww7veiy9fuw4tma2nkb5b7tdp0efv007cnfzb8wmtfqig35q4qxsf7d5esd6g72ly72byz86zpc4cniwdhppn8u6h8s3vnnz3f5etcybqv4msp5xku8qohv2z4y1tdey8csa6fjet639i6k4icdvwp9vao87802u8qyd9okm2ljqai5jkxwuoiojoo2eakdh3td0wuyxvdkexah95r9lij7jdaf7bx43y42rteqt4wwbzwzscox92zw324zb2oyvexxeqh6irsf2to3bplgij3ru04m2m9w6zw1dq5s8gx1ga66r1oetv9hys65e9uuwqvcn1bt6g052a5lm7z4r425ltt7yaae8w2s7lizeo5igikqfb40pzkqtiq6j6fuh394ai73e0o42g05fvbrq2oazcdm9lqt4k2e18vf3j39vq8ie934ipmv5bu05ucfz2vno9qxsoh8h2hypes67f161fgt2rs0azb26kqhd2uv0xa53trh == \5\j\n\x\g\i\p\8\v\4\8\a\o\w\w\7\v\e\i\y\9\f\u\w\4\t\m\a\2\n\k\b\5\b\7\t\d\p\0\e\f\v\0\0\7\c\n\f\z\b\8\w\m\t\f\q\i\g\3\5\q\4\q\x\s\f\7\d\5\e\s\d\6\g\7\2\l\y\7\2\b\y\z\8\6\z\p\c\4\c\n\i\w\d\h\p\p\n\8\u\6\h\8\s\3\v\n\n\z\3\f\5\e\t\c\y\b\q\v\4\m\s\p\5\x\k\u\8\q\o\h\v\2\z\4\y\1\t\d\e\y\8\c\s\a\6\f\j\e\t\6\3\9\i\6\k\4\i\c\d\v\w\p\9\v\a\o\8\7\8\0\2\u\8\q\y\d\9\o\k\m\2\l\j\q\a\i\5\j\k\x\w\u\o\i\o\j\o\o\2\e\a\k\d\h\3\t\d\0\w\u\y\x\v\d\k\e\x\a\h\9\5\r\9\l\i\j\7\j\d\a\f\7\b\x\4\3\y\4\2\r\t\e\q\t\4\w\w\b\z\w\z\s\c\o\x\9\2\z\w\3\2\4\z\b\2\o\y\v\e\x\x\e\q\h\6\i\r\s\f\2\t\o\3\b\p\l\g\i\j\3\r\u\0\4\m\2\m\9\w\6\z\w\1\d\q\5\s\8\g\x\1\g\a\6\6\r\1\o\e\t\v\9\h\y\s\6\5\e\9\u\u\w\q\v\c\n\1\b\t\6\g\0\5\2\a\5\l\m\7\z\4\r\4\2\5\l\t\t\7\y\a\a\e\8\w\2\s\7\l\i\z\e\o\5\i\g\i\k\q\f\b\4\0\p\z\k\q\t\i\q\6\j\6\f\u\h\3\9\4\a\i\7\3\e\0\o\4\2\g\0\5\f\v\b\r\q\2\o\a\z\c\d\m\9\l\q\t\4\k\2\e\1\8\v\f\3\j\3\9\v\q\8\i\e\9\3\4\i\p\m\v\5\b\u\0\5\u\c\f\z\2\v\n\o\9\q\x\s\o\h\8\h\2\h\y\p\e\s\6\7\f\1\6\1\f\g\t\2\r\s\0\a\z\b\2\6\k\q\h\d\2\u\v\0\x\a\5\3\t\r\h ]] 00:12:10.798 00:12:10.798 real 0m1.114s 00:12:10.798 user 0m0.543s 00:12:10.798 sys 0m0.329s 00:12:10.798 ************************************ 00:12:10.798 END TEST dd_flag_nofollow 00:12:10.798 ************************************ 00:12:10.798 14:38:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:10.798 14:38:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:12:10.798 14:38:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:12:10.798 14:38:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:10.798 14:38:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:10.798 14:38:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:12:10.798 ************************************ 00:12:10.798 START TEST dd_flag_noatime 00:12:10.798 ************************************ 00:12:10.798 14:38:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1127 -- # noatime 00:12:10.798 14:38:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:12:10.798 14:38:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:12:10.798 14:38:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:12:10.798 14:38:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:12:10.798 14:38:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:12:10.798 14:38:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:10.798 14:38:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1730731099 00:12:10.798 14:38:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:10.798 14:38:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1730731099 00:12:10.798 14:38:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:12:11.768 14:38:20 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:11.768 [2024-11-04 14:38:20.834595] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:12:11.768 [2024-11-04 14:38:20.834662] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59429 ] 00:12:12.025 [2024-11-04 14:38:20.969093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.025 [2024-11-04 14:38:21.009875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.025 [2024-11-04 14:38:21.042839] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:12.025  [2024-11-04T14:38:21.424Z] Copying: 512/512 [B] (average 500 kBps) 00:12:12.284 00:12:12.284 14:38:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:12.284 14:38:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1730731099 )) 00:12:12.284 14:38:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:12.284 14:38:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1730731099 )) 00:12:12.284 14:38:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:12.284 [2024-11-04 14:38:21.203831] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
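The dd_flag_noatime case now running records the input file's access time with stat --printf=%X, sleeps past the one-second granularity, reads the file with --iflag=noatime (atime must not move), and later reads it again without the flag (atime must advance, which is the atime_if < ... check further down). A sketch of that check; it assumes a filesystem whose mount options actually allow atime updates:

#!/usr/bin/env bash
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

head -c 512 /dev/urandom > dd.dump0
atime_if=$(stat --printf=%X dd.dump0)

sleep 1
"$SPDK_DD" --if=dd.dump0 --iflag=noatime --of=dd.dump1
(( atime_if == $(stat --printf=%X dd.dump0) )) && echo "noatime: atime unchanged"

sleep 1
"$SPDK_DD" --if=dd.dump0 --of=dd.dump1
(( atime_if < $(stat --printf=%X dd.dump0) )) && echo "default read advanced atime"
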
00:12:12.284 [2024-11-04 14:38:21.203890] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59448 ] 00:12:12.284 [2024-11-04 14:38:21.342673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.284 [2024-11-04 14:38:21.379214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.284 [2024-11-04 14:38:21.410756] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:12.541  [2024-11-04T14:38:21.681Z] Copying: 512/512 [B] (average 500 kBps) 00:12:12.541 00:12:12.541 14:38:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:12.541 14:38:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1730731101 )) 00:12:12.541 00:12:12.541 real 0m1.762s 00:12:12.541 user 0m0.379s 00:12:12.541 sys 0m0.318s 00:12:12.541 14:38:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:12.541 14:38:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:12:12.541 ************************************ 00:12:12.541 END TEST dd_flag_noatime 00:12:12.541 ************************************ 00:12:12.541 14:38:21 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:12:12.541 14:38:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:12.541 14:38:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:12.541 14:38:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:12:12.541 ************************************ 00:12:12.541 START TEST dd_flags_misc 00:12:12.541 ************************************ 00:12:12.541 14:38:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1127 -- # io 00:12:12.541 14:38:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:12:12.541 14:38:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:12:12.541 14:38:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:12:12.541 14:38:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:12:12.541 14:38:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:12:12.541 14:38:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:12:12.541 14:38:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:12:12.541 14:38:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:12.541 14:38:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:12:12.541 [2024-11-04 14:38:21.619793] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
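dd_flags_misc, which starts here, sweeps the remaining open flags: every read flag in flags_ro is paired with every write flag in flags_rw, and each round trip must reproduce the input exactly (the direct, nonblock, sync and dsync runs follow below). A sketch of the loop with the arrays from the trace; cmp stands in for the literal [[ ... == ... ]] comparison the test performs on the copied bytes:

#!/usr/bin/env bash
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)

head -c 512 /dev/urandom | base64 | tr -d '\n' | head -c 512 > dd.dump0

for flag_ro in "${flags_ro[@]}"; do
    for flag_rw in "${flags_rw[@]}"; do
        "$SPDK_DD" --if=dd.dump0 --iflag="$flag_ro" \
                   --of=dd.dump1 --oflag="$flag_rw"
        cmp -s dd.dump0 dd.dump1 && echo "ok: iflag=$flag_ro oflag=$flag_rw"
    done
done
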
00:12:12.542 [2024-11-04 14:38:21.619869] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59471 ] 00:12:12.800 [2024-11-04 14:38:21.761421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.800 [2024-11-04 14:38:21.802121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.800 [2024-11-04 14:38:21.835878] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:12.800  [2024-11-04T14:38:22.198Z] Copying: 512/512 [B] (average 500 kBps) 00:12:13.058 00:12:13.059 14:38:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ v2bh3n4qz80ztk6ynj477vstumx915wdneazacuoj1o8pmwkxllx9jlr5p65gtzyfdsm8m1wmrrfxsz8v13ryjky716x8cfiaiqdsbfpkxj5krj7sso783fwnz7ucne7u53s95qvfj2b0g6r9hx0rr7k7wvtxslz2fmgdke3qxfnjtvj172pcxp05wtcas9kayh7y1i6p4yurw625h66b5gjo8tdzevj6nmi9g16vf9y6urukyyg4hv3ds17zd5xpi14p1f8h8w04dryc3wb66iefp03j6frbgzzmirtaclpoo8r78o1rfhd1owzb0sm4juw6kkvqw19tvrhx4ecqwywhxsgx1yblufv5m35yi95wl3sg1mbrtozoayhcvy5iui1sol9fxesr75qms1lty6g3hudgouop4uux9zj8h57667vmf5nhcgqtv41k2nwqkpwnax2774uhxe665g48y985z94dfc6m3v8m7hx0xyt10987j1hyyrzvw6nlu4m == \v\2\b\h\3\n\4\q\z\8\0\z\t\k\6\y\n\j\4\7\7\v\s\t\u\m\x\9\1\5\w\d\n\e\a\z\a\c\u\o\j\1\o\8\p\m\w\k\x\l\l\x\9\j\l\r\5\p\6\5\g\t\z\y\f\d\s\m\8\m\1\w\m\r\r\f\x\s\z\8\v\1\3\r\y\j\k\y\7\1\6\x\8\c\f\i\a\i\q\d\s\b\f\p\k\x\j\5\k\r\j\7\s\s\o\7\8\3\f\w\n\z\7\u\c\n\e\7\u\5\3\s\9\5\q\v\f\j\2\b\0\g\6\r\9\h\x\0\r\r\7\k\7\w\v\t\x\s\l\z\2\f\m\g\d\k\e\3\q\x\f\n\j\t\v\j\1\7\2\p\c\x\p\0\5\w\t\c\a\s\9\k\a\y\h\7\y\1\i\6\p\4\y\u\r\w\6\2\5\h\6\6\b\5\g\j\o\8\t\d\z\e\v\j\6\n\m\i\9\g\1\6\v\f\9\y\6\u\r\u\k\y\y\g\4\h\v\3\d\s\1\7\z\d\5\x\p\i\1\4\p\1\f\8\h\8\w\0\4\d\r\y\c\3\w\b\6\6\i\e\f\p\0\3\j\6\f\r\b\g\z\z\m\i\r\t\a\c\l\p\o\o\8\r\7\8\o\1\r\f\h\d\1\o\w\z\b\0\s\m\4\j\u\w\6\k\k\v\q\w\1\9\t\v\r\h\x\4\e\c\q\w\y\w\h\x\s\g\x\1\y\b\l\u\f\v\5\m\3\5\y\i\9\5\w\l\3\s\g\1\m\b\r\t\o\z\o\a\y\h\c\v\y\5\i\u\i\1\s\o\l\9\f\x\e\s\r\7\5\q\m\s\1\l\t\y\6\g\3\h\u\d\g\o\u\o\p\4\u\u\x\9\z\j\8\h\5\7\6\6\7\v\m\f\5\n\h\c\g\q\t\v\4\1\k\2\n\w\q\k\p\w\n\a\x\2\7\7\4\u\h\x\e\6\6\5\g\4\8\y\9\8\5\z\9\4\d\f\c\6\m\3\v\8\m\7\h\x\0\x\y\t\1\0\9\8\7\j\1\h\y\y\r\z\v\w\6\n\l\u\4\m ]] 00:12:13.059 14:38:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:13.059 14:38:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:12:13.059 [2024-11-04 14:38:22.001347] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
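A note on why every expected value in this log appears twice, the second time with each character backslash-escaped: inside [[ ]] an unquoted right-hand side is treated as a glob pattern, so the dd tests escape it to force a byte-for-byte comparison, and xtrace prints that escaped form. Quoting gives the same literal match, as this small illustration (with made-up values) shows:

#!/usr/bin/env bash
data='v2bh?n4qz*'   # contains glob metacharacters on purpose

[[ v2bhXn4qzanything == v2bh?n4qz* ]]  && echo "unquoted RHS is a pattern: this matches too"
[[ $data == "v2bh?n4qz*" ]]            && echo "quoted RHS: literal match"
[[ $data == \v\2\b\h\?\n\4\q\z\* ]]    && echo "escaped RHS: literal match as well"
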
00:12:13.059 [2024-11-04 14:38:22.001418] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59481 ] 00:12:13.059 [2024-11-04 14:38:22.132846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.059 [2024-11-04 14:38:22.169375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.318 [2024-11-04 14:38:22.201052] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:13.318  [2024-11-04T14:38:22.458Z] Copying: 512/512 [B] (average 500 kBps) 00:12:13.318 00:12:13.318 14:38:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ v2bh3n4qz80ztk6ynj477vstumx915wdneazacuoj1o8pmwkxllx9jlr5p65gtzyfdsm8m1wmrrfxsz8v13ryjky716x8cfiaiqdsbfpkxj5krj7sso783fwnz7ucne7u53s95qvfj2b0g6r9hx0rr7k7wvtxslz2fmgdke3qxfnjtvj172pcxp05wtcas9kayh7y1i6p4yurw625h66b5gjo8tdzevj6nmi9g16vf9y6urukyyg4hv3ds17zd5xpi14p1f8h8w04dryc3wb66iefp03j6frbgzzmirtaclpoo8r78o1rfhd1owzb0sm4juw6kkvqw19tvrhx4ecqwywhxsgx1yblufv5m35yi95wl3sg1mbrtozoayhcvy5iui1sol9fxesr75qms1lty6g3hudgouop4uux9zj8h57667vmf5nhcgqtv41k2nwqkpwnax2774uhxe665g48y985z94dfc6m3v8m7hx0xyt10987j1hyyrzvw6nlu4m == \v\2\b\h\3\n\4\q\z\8\0\z\t\k\6\y\n\j\4\7\7\v\s\t\u\m\x\9\1\5\w\d\n\e\a\z\a\c\u\o\j\1\o\8\p\m\w\k\x\l\l\x\9\j\l\r\5\p\6\5\g\t\z\y\f\d\s\m\8\m\1\w\m\r\r\f\x\s\z\8\v\1\3\r\y\j\k\y\7\1\6\x\8\c\f\i\a\i\q\d\s\b\f\p\k\x\j\5\k\r\j\7\s\s\o\7\8\3\f\w\n\z\7\u\c\n\e\7\u\5\3\s\9\5\q\v\f\j\2\b\0\g\6\r\9\h\x\0\r\r\7\k\7\w\v\t\x\s\l\z\2\f\m\g\d\k\e\3\q\x\f\n\j\t\v\j\1\7\2\p\c\x\p\0\5\w\t\c\a\s\9\k\a\y\h\7\y\1\i\6\p\4\y\u\r\w\6\2\5\h\6\6\b\5\g\j\o\8\t\d\z\e\v\j\6\n\m\i\9\g\1\6\v\f\9\y\6\u\r\u\k\y\y\g\4\h\v\3\d\s\1\7\z\d\5\x\p\i\1\4\p\1\f\8\h\8\w\0\4\d\r\y\c\3\w\b\6\6\i\e\f\p\0\3\j\6\f\r\b\g\z\z\m\i\r\t\a\c\l\p\o\o\8\r\7\8\o\1\r\f\h\d\1\o\w\z\b\0\s\m\4\j\u\w\6\k\k\v\q\w\1\9\t\v\r\h\x\4\e\c\q\w\y\w\h\x\s\g\x\1\y\b\l\u\f\v\5\m\3\5\y\i\9\5\w\l\3\s\g\1\m\b\r\t\o\z\o\a\y\h\c\v\y\5\i\u\i\1\s\o\l\9\f\x\e\s\r\7\5\q\m\s\1\l\t\y\6\g\3\h\u\d\g\o\u\o\p\4\u\u\x\9\z\j\8\h\5\7\6\6\7\v\m\f\5\n\h\c\g\q\t\v\4\1\k\2\n\w\q\k\p\w\n\a\x\2\7\7\4\u\h\x\e\6\6\5\g\4\8\y\9\8\5\z\9\4\d\f\c\6\m\3\v\8\m\7\h\x\0\x\y\t\1\0\9\8\7\j\1\h\y\y\r\z\v\w\6\n\l\u\4\m ]] 00:12:13.318 14:38:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:13.318 14:38:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:12:13.318 [2024-11-04 14:38:22.360504] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:13.318 [2024-11-04 14:38:22.360565] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59490 ] 00:12:13.598 [2024-11-04 14:38:22.495341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.598 [2024-11-04 14:38:22.533907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.598 [2024-11-04 14:38:22.567251] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:13.598  [2024-11-04T14:38:22.738Z] Copying: 512/512 [B] (average 100 kBps) 00:12:13.598 00:12:13.598 14:38:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ v2bh3n4qz80ztk6ynj477vstumx915wdneazacuoj1o8pmwkxllx9jlr5p65gtzyfdsm8m1wmrrfxsz8v13ryjky716x8cfiaiqdsbfpkxj5krj7sso783fwnz7ucne7u53s95qvfj2b0g6r9hx0rr7k7wvtxslz2fmgdke3qxfnjtvj172pcxp05wtcas9kayh7y1i6p4yurw625h66b5gjo8tdzevj6nmi9g16vf9y6urukyyg4hv3ds17zd5xpi14p1f8h8w04dryc3wb66iefp03j6frbgzzmirtaclpoo8r78o1rfhd1owzb0sm4juw6kkvqw19tvrhx4ecqwywhxsgx1yblufv5m35yi95wl3sg1mbrtozoayhcvy5iui1sol9fxesr75qms1lty6g3hudgouop4uux9zj8h57667vmf5nhcgqtv41k2nwqkpwnax2774uhxe665g48y985z94dfc6m3v8m7hx0xyt10987j1hyyrzvw6nlu4m == \v\2\b\h\3\n\4\q\z\8\0\z\t\k\6\y\n\j\4\7\7\v\s\t\u\m\x\9\1\5\w\d\n\e\a\z\a\c\u\o\j\1\o\8\p\m\w\k\x\l\l\x\9\j\l\r\5\p\6\5\g\t\z\y\f\d\s\m\8\m\1\w\m\r\r\f\x\s\z\8\v\1\3\r\y\j\k\y\7\1\6\x\8\c\f\i\a\i\q\d\s\b\f\p\k\x\j\5\k\r\j\7\s\s\o\7\8\3\f\w\n\z\7\u\c\n\e\7\u\5\3\s\9\5\q\v\f\j\2\b\0\g\6\r\9\h\x\0\r\r\7\k\7\w\v\t\x\s\l\z\2\f\m\g\d\k\e\3\q\x\f\n\j\t\v\j\1\7\2\p\c\x\p\0\5\w\t\c\a\s\9\k\a\y\h\7\y\1\i\6\p\4\y\u\r\w\6\2\5\h\6\6\b\5\g\j\o\8\t\d\z\e\v\j\6\n\m\i\9\g\1\6\v\f\9\y\6\u\r\u\k\y\y\g\4\h\v\3\d\s\1\7\z\d\5\x\p\i\1\4\p\1\f\8\h\8\w\0\4\d\r\y\c\3\w\b\6\6\i\e\f\p\0\3\j\6\f\r\b\g\z\z\m\i\r\t\a\c\l\p\o\o\8\r\7\8\o\1\r\f\h\d\1\o\w\z\b\0\s\m\4\j\u\w\6\k\k\v\q\w\1\9\t\v\r\h\x\4\e\c\q\w\y\w\h\x\s\g\x\1\y\b\l\u\f\v\5\m\3\5\y\i\9\5\w\l\3\s\g\1\m\b\r\t\o\z\o\a\y\h\c\v\y\5\i\u\i\1\s\o\l\9\f\x\e\s\r\7\5\q\m\s\1\l\t\y\6\g\3\h\u\d\g\o\u\o\p\4\u\u\x\9\z\j\8\h\5\7\6\6\7\v\m\f\5\n\h\c\g\q\t\v\4\1\k\2\n\w\q\k\p\w\n\a\x\2\7\7\4\u\h\x\e\6\6\5\g\4\8\y\9\8\5\z\9\4\d\f\c\6\m\3\v\8\m\7\h\x\0\x\y\t\1\0\9\8\7\j\1\h\y\y\r\z\v\w\6\n\l\u\4\m ]] 00:12:13.598 14:38:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:13.598 14:38:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:12:13.892 [2024-11-04 14:38:22.739464] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:13.892 [2024-11-04 14:38:22.739538] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59494 ] 00:12:13.892 [2024-11-04 14:38:22.879858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.892 [2024-11-04 14:38:22.916912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.892 [2024-11-04 14:38:22.948689] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:14.150  [2024-11-04T14:38:23.290Z] Copying: 512/512 [B] (average 6826 Bps) 00:12:14.150 00:12:14.150 14:38:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ v2bh3n4qz80ztk6ynj477vstumx915wdneazacuoj1o8pmwkxllx9jlr5p65gtzyfdsm8m1wmrrfxsz8v13ryjky716x8cfiaiqdsbfpkxj5krj7sso783fwnz7ucne7u53s95qvfj2b0g6r9hx0rr7k7wvtxslz2fmgdke3qxfnjtvj172pcxp05wtcas9kayh7y1i6p4yurw625h66b5gjo8tdzevj6nmi9g16vf9y6urukyyg4hv3ds17zd5xpi14p1f8h8w04dryc3wb66iefp03j6frbgzzmirtaclpoo8r78o1rfhd1owzb0sm4juw6kkvqw19tvrhx4ecqwywhxsgx1yblufv5m35yi95wl3sg1mbrtozoayhcvy5iui1sol9fxesr75qms1lty6g3hudgouop4uux9zj8h57667vmf5nhcgqtv41k2nwqkpwnax2774uhxe665g48y985z94dfc6m3v8m7hx0xyt10987j1hyyrzvw6nlu4m == \v\2\b\h\3\n\4\q\z\8\0\z\t\k\6\y\n\j\4\7\7\v\s\t\u\m\x\9\1\5\w\d\n\e\a\z\a\c\u\o\j\1\o\8\p\m\w\k\x\l\l\x\9\j\l\r\5\p\6\5\g\t\z\y\f\d\s\m\8\m\1\w\m\r\r\f\x\s\z\8\v\1\3\r\y\j\k\y\7\1\6\x\8\c\f\i\a\i\q\d\s\b\f\p\k\x\j\5\k\r\j\7\s\s\o\7\8\3\f\w\n\z\7\u\c\n\e\7\u\5\3\s\9\5\q\v\f\j\2\b\0\g\6\r\9\h\x\0\r\r\7\k\7\w\v\t\x\s\l\z\2\f\m\g\d\k\e\3\q\x\f\n\j\t\v\j\1\7\2\p\c\x\p\0\5\w\t\c\a\s\9\k\a\y\h\7\y\1\i\6\p\4\y\u\r\w\6\2\5\h\6\6\b\5\g\j\o\8\t\d\z\e\v\j\6\n\m\i\9\g\1\6\v\f\9\y\6\u\r\u\k\y\y\g\4\h\v\3\d\s\1\7\z\d\5\x\p\i\1\4\p\1\f\8\h\8\w\0\4\d\r\y\c\3\w\b\6\6\i\e\f\p\0\3\j\6\f\r\b\g\z\z\m\i\r\t\a\c\l\p\o\o\8\r\7\8\o\1\r\f\h\d\1\o\w\z\b\0\s\m\4\j\u\w\6\k\k\v\q\w\1\9\t\v\r\h\x\4\e\c\q\w\y\w\h\x\s\g\x\1\y\b\l\u\f\v\5\m\3\5\y\i\9\5\w\l\3\s\g\1\m\b\r\t\o\z\o\a\y\h\c\v\y\5\i\u\i\1\s\o\l\9\f\x\e\s\r\7\5\q\m\s\1\l\t\y\6\g\3\h\u\d\g\o\u\o\p\4\u\u\x\9\z\j\8\h\5\7\6\6\7\v\m\f\5\n\h\c\g\q\t\v\4\1\k\2\n\w\q\k\p\w\n\a\x\2\7\7\4\u\h\x\e\6\6\5\g\4\8\y\9\8\5\z\9\4\d\f\c\6\m\3\v\8\m\7\h\x\0\x\y\t\1\0\9\8\7\j\1\h\y\y\r\z\v\w\6\n\l\u\4\m ]] 00:12:14.150 14:38:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:12:14.150 14:38:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:12:14.150 14:38:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:12:14.150 14:38:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:12:14.150 14:38:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:14.150 14:38:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:12:14.150 [2024-11-04 14:38:23.187015] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:14.150 [2024-11-04 14:38:23.187075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59504 ] 00:12:14.416 [2024-11-04 14:38:23.324660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.416 [2024-11-04 14:38:23.361162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.416 [2024-11-04 14:38:23.392385] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:14.416  [2024-11-04T14:38:23.556Z] Copying: 512/512 [B] (average 500 kBps) 00:12:14.416 00:12:14.416 14:38:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jm5o9kvbbjql0ircysbk7919b52brs7uul8jidxpiyksvh90391e8cxomriivujo6omhi9q3dngij2empsdfr62hxh6j406yh7ymphzvtsoy1xcypuonblcjmh2nn7jv6u4gc6c6f3lp80hu67mvt2c6r9wg8boigpsmxqsq6ac86g9wtwsi2pp7s35l90urhg72yky4cwekfzlugz3kfeuhgjxvxu83nxue7zfi49xpufkhab7pap47ul5y3hfl4tkma61zx3dgldpr9vv1pox9ml8rsyb19i68ry4vjo16vu8uqqxz9s6nie5e1craqofcm56z5kgrwvoc3ajwq7dasfsw00kucu80j81c9kh3wzdvhnirqayjahhrtwjhfkywqb026oz6ajjs4slzbfamlc5ff83rurrqeqn69gzn3u9focxv2lodabxzkrrc9zyzuvw309w2zytci8gtc8n704hr2816y9q4ekab3gxfv8zpnvpyom0s5gs27k8h == \j\m\5\o\9\k\v\b\b\j\q\l\0\i\r\c\y\s\b\k\7\9\1\9\b\5\2\b\r\s\7\u\u\l\8\j\i\d\x\p\i\y\k\s\v\h\9\0\3\9\1\e\8\c\x\o\m\r\i\i\v\u\j\o\6\o\m\h\i\9\q\3\d\n\g\i\j\2\e\m\p\s\d\f\r\6\2\h\x\h\6\j\4\0\6\y\h\7\y\m\p\h\z\v\t\s\o\y\1\x\c\y\p\u\o\n\b\l\c\j\m\h\2\n\n\7\j\v\6\u\4\g\c\6\c\6\f\3\l\p\8\0\h\u\6\7\m\v\t\2\c\6\r\9\w\g\8\b\o\i\g\p\s\m\x\q\s\q\6\a\c\8\6\g\9\w\t\w\s\i\2\p\p\7\s\3\5\l\9\0\u\r\h\g\7\2\y\k\y\4\c\w\e\k\f\z\l\u\g\z\3\k\f\e\u\h\g\j\x\v\x\u\8\3\n\x\u\e\7\z\f\i\4\9\x\p\u\f\k\h\a\b\7\p\a\p\4\7\u\l\5\y\3\h\f\l\4\t\k\m\a\6\1\z\x\3\d\g\l\d\p\r\9\v\v\1\p\o\x\9\m\l\8\r\s\y\b\1\9\i\6\8\r\y\4\v\j\o\1\6\v\u\8\u\q\q\x\z\9\s\6\n\i\e\5\e\1\c\r\a\q\o\f\c\m\5\6\z\5\k\g\r\w\v\o\c\3\a\j\w\q\7\d\a\s\f\s\w\0\0\k\u\c\u\8\0\j\8\1\c\9\k\h\3\w\z\d\v\h\n\i\r\q\a\y\j\a\h\h\r\t\w\j\h\f\k\y\w\q\b\0\2\6\o\z\6\a\j\j\s\4\s\l\z\b\f\a\m\l\c\5\f\f\8\3\r\u\r\r\q\e\q\n\6\9\g\z\n\3\u\9\f\o\c\x\v\2\l\o\d\a\b\x\z\k\r\r\c\9\z\y\z\u\v\w\3\0\9\w\2\z\y\t\c\i\8\g\t\c\8\n\7\0\4\h\r\2\8\1\6\y\9\q\4\e\k\a\b\3\g\x\f\v\8\z\p\n\v\p\y\o\m\0\s\5\g\s\2\7\k\8\h ]] 00:12:14.416 14:38:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:14.416 14:38:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:12:14.416 [2024-11-04 14:38:23.544617] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:14.416 [2024-11-04 14:38:23.544701] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59513 ] 00:12:14.674 [2024-11-04 14:38:23.683263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.674 [2024-11-04 14:38:23.719228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.674 [2024-11-04 14:38:23.750297] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:14.674  [2024-11-04T14:38:24.072Z] Copying: 512/512 [B] (average 500 kBps) 00:12:14.932 00:12:14.932 14:38:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jm5o9kvbbjql0ircysbk7919b52brs7uul8jidxpiyksvh90391e8cxomriivujo6omhi9q3dngij2empsdfr62hxh6j406yh7ymphzvtsoy1xcypuonblcjmh2nn7jv6u4gc6c6f3lp80hu67mvt2c6r9wg8boigpsmxqsq6ac86g9wtwsi2pp7s35l90urhg72yky4cwekfzlugz3kfeuhgjxvxu83nxue7zfi49xpufkhab7pap47ul5y3hfl4tkma61zx3dgldpr9vv1pox9ml8rsyb19i68ry4vjo16vu8uqqxz9s6nie5e1craqofcm56z5kgrwvoc3ajwq7dasfsw00kucu80j81c9kh3wzdvhnirqayjahhrtwjhfkywqb026oz6ajjs4slzbfamlc5ff83rurrqeqn69gzn3u9focxv2lodabxzkrrc9zyzuvw309w2zytci8gtc8n704hr2816y9q4ekab3gxfv8zpnvpyom0s5gs27k8h == \j\m\5\o\9\k\v\b\b\j\q\l\0\i\r\c\y\s\b\k\7\9\1\9\b\5\2\b\r\s\7\u\u\l\8\j\i\d\x\p\i\y\k\s\v\h\9\0\3\9\1\e\8\c\x\o\m\r\i\i\v\u\j\o\6\o\m\h\i\9\q\3\d\n\g\i\j\2\e\m\p\s\d\f\r\6\2\h\x\h\6\j\4\0\6\y\h\7\y\m\p\h\z\v\t\s\o\y\1\x\c\y\p\u\o\n\b\l\c\j\m\h\2\n\n\7\j\v\6\u\4\g\c\6\c\6\f\3\l\p\8\0\h\u\6\7\m\v\t\2\c\6\r\9\w\g\8\b\o\i\g\p\s\m\x\q\s\q\6\a\c\8\6\g\9\w\t\w\s\i\2\p\p\7\s\3\5\l\9\0\u\r\h\g\7\2\y\k\y\4\c\w\e\k\f\z\l\u\g\z\3\k\f\e\u\h\g\j\x\v\x\u\8\3\n\x\u\e\7\z\f\i\4\9\x\p\u\f\k\h\a\b\7\p\a\p\4\7\u\l\5\y\3\h\f\l\4\t\k\m\a\6\1\z\x\3\d\g\l\d\p\r\9\v\v\1\p\o\x\9\m\l\8\r\s\y\b\1\9\i\6\8\r\y\4\v\j\o\1\6\v\u\8\u\q\q\x\z\9\s\6\n\i\e\5\e\1\c\r\a\q\o\f\c\m\5\6\z\5\k\g\r\w\v\o\c\3\a\j\w\q\7\d\a\s\f\s\w\0\0\k\u\c\u\8\0\j\8\1\c\9\k\h\3\w\z\d\v\h\n\i\r\q\a\y\j\a\h\h\r\t\w\j\h\f\k\y\w\q\b\0\2\6\o\z\6\a\j\j\s\4\s\l\z\b\f\a\m\l\c\5\f\f\8\3\r\u\r\r\q\e\q\n\6\9\g\z\n\3\u\9\f\o\c\x\v\2\l\o\d\a\b\x\z\k\r\r\c\9\z\y\z\u\v\w\3\0\9\w\2\z\y\t\c\i\8\g\t\c\8\n\7\0\4\h\r\2\8\1\6\y\9\q\4\e\k\a\b\3\g\x\f\v\8\z\p\n\v\p\y\o\m\0\s\5\g\s\2\7\k\8\h ]] 00:12:14.932 14:38:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:14.932 14:38:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:12:14.932 [2024-11-04 14:38:23.917836] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:14.932 [2024-11-04 14:38:23.917904] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59517 ] 00:12:14.932 [2024-11-04 14:38:24.057688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.190 [2024-11-04 14:38:24.093904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.190 [2024-11-04 14:38:24.125461] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:15.190  [2024-11-04T14:38:24.330Z] Copying: 512/512 [B] (average 500 kBps) 00:12:15.190 00:12:15.190 14:38:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jm5o9kvbbjql0ircysbk7919b52brs7uul8jidxpiyksvh90391e8cxomriivujo6omhi9q3dngij2empsdfr62hxh6j406yh7ymphzvtsoy1xcypuonblcjmh2nn7jv6u4gc6c6f3lp80hu67mvt2c6r9wg8boigpsmxqsq6ac86g9wtwsi2pp7s35l90urhg72yky4cwekfzlugz3kfeuhgjxvxu83nxue7zfi49xpufkhab7pap47ul5y3hfl4tkma61zx3dgldpr9vv1pox9ml8rsyb19i68ry4vjo16vu8uqqxz9s6nie5e1craqofcm56z5kgrwvoc3ajwq7dasfsw00kucu80j81c9kh3wzdvhnirqayjahhrtwjhfkywqb026oz6ajjs4slzbfamlc5ff83rurrqeqn69gzn3u9focxv2lodabxzkrrc9zyzuvw309w2zytci8gtc8n704hr2816y9q4ekab3gxfv8zpnvpyom0s5gs27k8h == \j\m\5\o\9\k\v\b\b\j\q\l\0\i\r\c\y\s\b\k\7\9\1\9\b\5\2\b\r\s\7\u\u\l\8\j\i\d\x\p\i\y\k\s\v\h\9\0\3\9\1\e\8\c\x\o\m\r\i\i\v\u\j\o\6\o\m\h\i\9\q\3\d\n\g\i\j\2\e\m\p\s\d\f\r\6\2\h\x\h\6\j\4\0\6\y\h\7\y\m\p\h\z\v\t\s\o\y\1\x\c\y\p\u\o\n\b\l\c\j\m\h\2\n\n\7\j\v\6\u\4\g\c\6\c\6\f\3\l\p\8\0\h\u\6\7\m\v\t\2\c\6\r\9\w\g\8\b\o\i\g\p\s\m\x\q\s\q\6\a\c\8\6\g\9\w\t\w\s\i\2\p\p\7\s\3\5\l\9\0\u\r\h\g\7\2\y\k\y\4\c\w\e\k\f\z\l\u\g\z\3\k\f\e\u\h\g\j\x\v\x\u\8\3\n\x\u\e\7\z\f\i\4\9\x\p\u\f\k\h\a\b\7\p\a\p\4\7\u\l\5\y\3\h\f\l\4\t\k\m\a\6\1\z\x\3\d\g\l\d\p\r\9\v\v\1\p\o\x\9\m\l\8\r\s\y\b\1\9\i\6\8\r\y\4\v\j\o\1\6\v\u\8\u\q\q\x\z\9\s\6\n\i\e\5\e\1\c\r\a\q\o\f\c\m\5\6\z\5\k\g\r\w\v\o\c\3\a\j\w\q\7\d\a\s\f\s\w\0\0\k\u\c\u\8\0\j\8\1\c\9\k\h\3\w\z\d\v\h\n\i\r\q\a\y\j\a\h\h\r\t\w\j\h\f\k\y\w\q\b\0\2\6\o\z\6\a\j\j\s\4\s\l\z\b\f\a\m\l\c\5\f\f\8\3\r\u\r\r\q\e\q\n\6\9\g\z\n\3\u\9\f\o\c\x\v\2\l\o\d\a\b\x\z\k\r\r\c\9\z\y\z\u\v\w\3\0\9\w\2\z\y\t\c\i\8\g\t\c\8\n\7\0\4\h\r\2\8\1\6\y\9\q\4\e\k\a\b\3\g\x\f\v\8\z\p\n\v\p\y\o\m\0\s\5\g\s\2\7\k\8\h ]] 00:12:15.190 14:38:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:15.190 14:38:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:12:15.190 [2024-11-04 14:38:24.289398] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:15.190 [2024-11-04 14:38:24.289484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59532 ] 00:12:15.448 [2024-11-04 14:38:24.428151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.448 [2024-11-04 14:38:24.464803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.448 [2024-11-04 14:38:24.495986] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:15.448  [2024-11-04T14:38:24.846Z] Copying: 512/512 [B] (average 250 kBps) 00:12:15.706 00:12:15.706 14:38:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jm5o9kvbbjql0ircysbk7919b52brs7uul8jidxpiyksvh90391e8cxomriivujo6omhi9q3dngij2empsdfr62hxh6j406yh7ymphzvtsoy1xcypuonblcjmh2nn7jv6u4gc6c6f3lp80hu67mvt2c6r9wg8boigpsmxqsq6ac86g9wtwsi2pp7s35l90urhg72yky4cwekfzlugz3kfeuhgjxvxu83nxue7zfi49xpufkhab7pap47ul5y3hfl4tkma61zx3dgldpr9vv1pox9ml8rsyb19i68ry4vjo16vu8uqqxz9s6nie5e1craqofcm56z5kgrwvoc3ajwq7dasfsw00kucu80j81c9kh3wzdvhnirqayjahhrtwjhfkywqb026oz6ajjs4slzbfamlc5ff83rurrqeqn69gzn3u9focxv2lodabxzkrrc9zyzuvw309w2zytci8gtc8n704hr2816y9q4ekab3gxfv8zpnvpyom0s5gs27k8h == \j\m\5\o\9\k\v\b\b\j\q\l\0\i\r\c\y\s\b\k\7\9\1\9\b\5\2\b\r\s\7\u\u\l\8\j\i\d\x\p\i\y\k\s\v\h\9\0\3\9\1\e\8\c\x\o\m\r\i\i\v\u\j\o\6\o\m\h\i\9\q\3\d\n\g\i\j\2\e\m\p\s\d\f\r\6\2\h\x\h\6\j\4\0\6\y\h\7\y\m\p\h\z\v\t\s\o\y\1\x\c\y\p\u\o\n\b\l\c\j\m\h\2\n\n\7\j\v\6\u\4\g\c\6\c\6\f\3\l\p\8\0\h\u\6\7\m\v\t\2\c\6\r\9\w\g\8\b\o\i\g\p\s\m\x\q\s\q\6\a\c\8\6\g\9\w\t\w\s\i\2\p\p\7\s\3\5\l\9\0\u\r\h\g\7\2\y\k\y\4\c\w\e\k\f\z\l\u\g\z\3\k\f\e\u\h\g\j\x\v\x\u\8\3\n\x\u\e\7\z\f\i\4\9\x\p\u\f\k\h\a\b\7\p\a\p\4\7\u\l\5\y\3\h\f\l\4\t\k\m\a\6\1\z\x\3\d\g\l\d\p\r\9\v\v\1\p\o\x\9\m\l\8\r\s\y\b\1\9\i\6\8\r\y\4\v\j\o\1\6\v\u\8\u\q\q\x\z\9\s\6\n\i\e\5\e\1\c\r\a\q\o\f\c\m\5\6\z\5\k\g\r\w\v\o\c\3\a\j\w\q\7\d\a\s\f\s\w\0\0\k\u\c\u\8\0\j\8\1\c\9\k\h\3\w\z\d\v\h\n\i\r\q\a\y\j\a\h\h\r\t\w\j\h\f\k\y\w\q\b\0\2\6\o\z\6\a\j\j\s\4\s\l\z\b\f\a\m\l\c\5\f\f\8\3\r\u\r\r\q\e\q\n\6\9\g\z\n\3\u\9\f\o\c\x\v\2\l\o\d\a\b\x\z\k\r\r\c\9\z\y\z\u\v\w\3\0\9\w\2\z\y\t\c\i\8\g\t\c\8\n\7\0\4\h\r\2\8\1\6\y\9\q\4\e\k\a\b\3\g\x\f\v\8\z\p\n\v\p\y\o\m\0\s\5\g\s\2\7\k\8\h ]] 00:12:15.706 00:12:15.706 real 0m3.039s 00:12:15.706 user 0m1.544s 00:12:15.706 sys 0m1.299s 00:12:15.706 14:38:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:15.706 14:38:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:12:15.706 ************************************ 00:12:15.706 END TEST dd_flags_misc 00:12:15.706 ************************************ 00:12:15.706 14:38:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:12:15.706 14:38:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:12:15.706 * Second test run, disabling liburing, forcing AIO 00:12:15.706 14:38:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:12:15.706 14:38:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:12:15.706 14:38:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:15.706 14:38:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:15.706 14:38:24 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:12:15.706 ************************************ 00:12:15.706 START TEST dd_flag_append_forced_aio 00:12:15.706 ************************************ 00:12:15.706 14:38:24 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1127 -- # append 00:12:15.706 14:38:24 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:12:15.706 14:38:24 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:12:15.706 14:38:24 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:12:15.706 14:38:24 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:12:15.706 14:38:24 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:12:15.706 14:38:24 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=r1net64equv2lov1nkws04wz943ab6n1 00:12:15.706 14:38:24 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:12:15.706 14:38:24 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:12:15.706 14:38:24 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:12:15.706 14:38:24 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=627vjy2ei3rdon1d8knti8dv5xbp7i87 00:12:15.706 14:38:24 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s r1net64equv2lov1nkws04wz943ab6n1 00:12:15.706 14:38:24 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 627vjy2ei3rdon1d8knti8dv5xbp7i87 00:12:15.706 14:38:24 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:12:15.706 [2024-11-04 14:38:24.702413] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
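The dd_flag_append_forced_aio run traced here writes a fresh 32-byte string into each dump file and then appends dump0 onto dump1 through spdk_dd's --oflag=append, expecting dump1 to end up as its own contents followed by dump0's. A sketch of that check, with values and paths copied from the xtrace (illustrative only, not the exact test/dd/posix.sh code):

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  D=/home/vagrant/spdk_repo/spdk/test/dd
  dump0=r1net64equv2lov1nkws04wz943ab6n1    # 32 generated bytes for dd.dump0
  dump1=627vjy2ei3rdon1d8knti8dv5xbp7i87    # 32 generated bytes for dd.dump1
  printf %s "$dump0" > "$D/dd.dump0"
  printf %s "$dump1" > "$D/dd.dump1"
  "$DD" --aio --if="$D/dd.dump0" --of="$D/dd.dump1" --oflag=append
  # dd.dump1 should now hold dump1 immediately followed by dump0
  [[ $(< "$D/dd.dump1") == "${dump1}${dump0}" ]]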
00:12:15.706 [2024-11-04 14:38:24.702477] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59555 ] 00:12:15.706 [2024-11-04 14:38:24.838555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.964 [2024-11-04 14:38:24.875058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.964 [2024-11-04 14:38:24.906601] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:15.964  [2024-11-04T14:38:25.104Z] Copying: 32/32 [B] (average 31 kBps) 00:12:15.964 00:12:15.964 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 627vjy2ei3rdon1d8knti8dv5xbp7i87r1net64equv2lov1nkws04wz943ab6n1 == \6\2\7\v\j\y\2\e\i\3\r\d\o\n\1\d\8\k\n\t\i\8\d\v\5\x\b\p\7\i\8\7\r\1\n\e\t\6\4\e\q\u\v\2\l\o\v\1\n\k\w\s\0\4\w\z\9\4\3\a\b\6\n\1 ]] 00:12:15.964 00:12:15.964 real 0m0.392s 00:12:15.964 user 0m0.194s 00:12:15.964 sys 0m0.079s 00:12:15.964 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:15.964 ************************************ 00:12:15.964 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:12:15.964 END TEST dd_flag_append_forced_aio 00:12:15.964 ************************************ 00:12:15.964 14:38:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:12:15.964 14:38:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:15.964 14:38:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:15.964 14:38:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:12:15.964 ************************************ 00:12:15.964 START TEST dd_flag_directory_forced_aio 00:12:15.964 ************************************ 00:12:15.964 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1127 -- # directory 00:12:15.964 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:15.964 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:12:15.964 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:15.964 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:15.964 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:15.964 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:15.964 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:15.964 14:38:25 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:15.964 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:15.964 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:15.964 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:15.964 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:16.222 [2024-11-04 14:38:25.131338] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:12:16.222 [2024-11-04 14:38:25.131404] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59582 ] 00:12:16.222 [2024-11-04 14:38:25.270399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.222 [2024-11-04 14:38:25.306659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.222 [2024-11-04 14:38:25.337824] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:16.222 [2024-11-04 14:38:25.361064] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:12:16.222 [2024-11-04 14:38:25.361205] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:12:16.222 [2024-11-04 14:38:25.361219] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:16.479 [2024-11-04 14:38:25.419626] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:16.479 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:12:16.479 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:16.480 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:12:16.480 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:12:16.480 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:12:16.480 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:16.480 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:12:16.480 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:12:16.480 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:12:16.480 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:16.480 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:16.480 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:16.480 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:16.480 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:16.480 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:16.480 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:16.480 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:16.480 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:12:16.480 [2024-11-04 14:38:25.505387] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:12:16.480 [2024-11-04 14:38:25.505453] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59591 ] 00:12:16.738 [2024-11-04 14:38:25.645764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.738 [2024-11-04 14:38:25.681788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.738 [2024-11-04 14:38:25.713403] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:16.738 [2024-11-04 14:38:25.737161] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:12:16.738 [2024-11-04 14:38:25.737202] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:12:16.738 [2024-11-04 14:38:25.737213] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:16.738 [2024-11-04 14:38:25.794471] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:16.738 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:12:16.738 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:16.738 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:12:16.738 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:12:16.738 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:12:16.738 14:38:25 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:16.738 ************************************ 00:12:16.738 END TEST dd_flag_directory_forced_aio 00:12:16.738 ************************************ 00:12:16.738 00:12:16.738 real 0m0.741s 00:12:16.738 user 0m0.378s 00:12:16.738 sys 0m0.155s 00:12:16.738 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:16.738 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:12:16.738 14:38:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:12:16.738 14:38:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:16.738 14:38:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:16.738 14:38:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:12:16.738 ************************************ 00:12:16.738 START TEST dd_flag_nofollow_forced_aio 00:12:16.738 ************************************ 00:12:16.738 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1127 -- # nofollow 00:12:16.738 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:12:16.738 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:12:16.738 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:12:16.738 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:12:16.997 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:16.997 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:12:16.997 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:16.997 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:16.997 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:16.997 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:16.997 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:16.997 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:16.997 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:16.997 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:16.997 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:16.997 14:38:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:16.997 [2024-11-04 14:38:25.917565] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:12:16.997 [2024-11-04 14:38:25.917639] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59614 ] 00:12:16.997 [2024-11-04 14:38:26.057237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.997 [2024-11-04 14:38:26.092172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.997 [2024-11-04 14:38:26.122123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:17.255 [2024-11-04 14:38:26.144518] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:12:17.255 [2024-11-04 14:38:26.144559] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:12:17.255 [2024-11-04 14:38:26.144573] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:17.255 [2024-11-04 14:38:26.200177] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:17.255 14:38:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:12:17.255 14:38:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:17.255 14:38:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:12:17.255 14:38:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:12:17.255 14:38:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:12:17.255 14:38:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:17.255 14:38:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:12:17.255 14:38:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:12:17.255 14:38:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:12:17.255 14:38:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:17.255 14:38:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:17.255 14:38:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:17.255 14:38:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:17.255 14:38:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:17.255 14:38:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:17.255 14:38:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:17.255 14:38:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:17.255 14:38:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:12:17.255 [2024-11-04 14:38:26.283465] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:12:17.255 [2024-11-04 14:38:26.283529] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59624 ] 00:12:17.530 [2024-11-04 14:38:26.425302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.530 [2024-11-04 14:38:26.460259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.530 [2024-11-04 14:38:26.490108] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:17.530 [2024-11-04 14:38:26.512441] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:12:17.530 [2024-11-04 14:38:26.512480] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:12:17.530 [2024-11-04 14:38:26.512495] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:17.530 [2024-11-04 14:38:26.567924] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:17.530 14:38:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:12:17.530 14:38:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:17.530 14:38:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:12:17.530 14:38:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:12:17.530 14:38:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:12:17.530 14:38:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:17.530 14:38:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:12:17.530 14:38:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:12:17.530 14:38:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:12:17.530 14:38:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:17.530 [2024-11-04 14:38:26.654784] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:12:17.530 [2024-11-04 14:38:26.654853] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59631 ] 00:12:17.814 [2024-11-04 14:38:26.794107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.814 [2024-11-04 14:38:26.829329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.814 [2024-11-04 14:38:26.859860] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:17.814  [2024-11-04T14:38:27.212Z] Copying: 512/512 [B] (average 500 kBps) 00:12:18.073 00:12:18.073 14:38:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 91vcvgz0vply9htk6w9t2cr8bj1vshvh5ar7scam2fx25jc6du1f7igkppi961brke0421lbopgtw3czltgob8iy39r5cmmflriog03elp3o4myaq60ml2w5idppw4mo7zdy5jmoavq0g6xx58fqqpslkjd700gh2fphtxjj9kz3ww7trq6dnz5f5zcb640w8ic3ukh68po7mbvvvctkrdtui5dvujz2cjap24fp0gydyzo09nbw7mz7x4n42e9hmw48u2kxmt3o1voyl3z80p616fn44xofizwauk3koi5nsls25kvjsd1acc16e4kn6vfiin3rn5m55zec71d68og5rtgg7w7837y1bug1i62mwaufw4b9mqa0j4fl0ai5r68zv8wzcehhk8lj9bbxxuzop3g02nz5h4h29yhooabtqbs528ao6qft4a4fvircwgfi9m7o04vdsbsl40oqu1u8pvtbbtvosore71yfhlgsnkoo9az1p8lejvbqrwih == \9\1\v\c\v\g\z\0\v\p\l\y\9\h\t\k\6\w\9\t\2\c\r\8\b\j\1\v\s\h\v\h\5\a\r\7\s\c\a\m\2\f\x\2\5\j\c\6\d\u\1\f\7\i\g\k\p\p\i\9\6\1\b\r\k\e\0\4\2\1\l\b\o\p\g\t\w\3\c\z\l\t\g\o\b\8\i\y\3\9\r\5\c\m\m\f\l\r\i\o\g\0\3\e\l\p\3\o\4\m\y\a\q\6\0\m\l\2\w\5\i\d\p\p\w\4\m\o\7\z\d\y\5\j\m\o\a\v\q\0\g\6\x\x\5\8\f\q\q\p\s\l\k\j\d\7\0\0\g\h\2\f\p\h\t\x\j\j\9\k\z\3\w\w\7\t\r\q\6\d\n\z\5\f\5\z\c\b\6\4\0\w\8\i\c\3\u\k\h\6\8\p\o\7\m\b\v\v\v\c\t\k\r\d\t\u\i\5\d\v\u\j\z\2\c\j\a\p\2\4\f\p\0\g\y\d\y\z\o\0\9\n\b\w\7\m\z\7\x\4\n\4\2\e\9\h\m\w\4\8\u\2\k\x\m\t\3\o\1\v\o\y\l\3\z\8\0\p\6\1\6\f\n\4\4\x\o\f\i\z\w\a\u\k\3\k\o\i\5\n\s\l\s\2\5\k\v\j\s\d\1\a\c\c\1\6\e\4\k\n\6\v\f\i\i\n\3\r\n\5\m\5\5\z\e\c\7\1\d\6\8\o\g\5\r\t\g\g\7\w\7\8\3\7\y\1\b\u\g\1\i\6\2\m\w\a\u\f\w\4\b\9\m\q\a\0\j\4\f\l\0\a\i\5\r\6\8\z\v\8\w\z\c\e\h\h\k\8\l\j\9\b\b\x\x\u\z\o\p\3\g\0\2\n\z\5\h\4\h\2\9\y\h\o\o\a\b\t\q\b\s\5\2\8\a\o\6\q\f\t\4\a\4\f\v\i\r\c\w\g\f\i\9\m\7\o\0\4\v\d\s\b\s\l\4\0\o\q\u\1\u\8\p\v\t\b\b\t\v\o\s\o\r\e\7\1\y\f\h\l\g\s\n\k\o\o\9\a\z\1\p\8\l\e\j\v\b\q\r\w\i\h ]] 00:12:18.073 00:12:18.073 real 0m1.141s 00:12:18.073 user 0m0.553s 00:12:18.073 sys 0m0.250s 00:12:18.073 14:38:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:18.073 14:38:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:12:18.073 ************************************ 00:12:18.073 END TEST dd_flag_nofollow_forced_aio 00:12:18.073 ************************************ 00:12:18.073 14:38:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
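The dd_flag_nofollow_forced_aio trace around this point symlinks both dump files and checks that spdk_dd refuses to follow the links when nofollow is set ("Too many levels of symbolic links" in the trace), but copies through them when it is not. A sketch of those three invocations, mirroring the xtrace (the test's NOT error-checking helper is simplified to a plain `!`):

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  D=/home/vagrant/spdk_repo/spdk/test/dd
  ln -fs "$D/dd.dump0" "$D/dd.dump0.link"
  ln -fs "$D/dd.dump1" "$D/dd.dump1.link"
  # with nofollow, reading or writing through a symlink must fail
  ! "$DD" --aio --if="$D/dd.dump0.link" --iflag=nofollow --of="$D/dd.dump1"
  ! "$DD" --aio --if="$D/dd.dump0" --of="$D/dd.dump1.link" --oflag=nofollow
  # without nofollow, the same copy through the link succeeds
  "$DD" --aio --if="$D/dd.dump0.link" --of="$D/dd.dump1"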
-- # run_test dd_flag_noatime_forced_aio noatime 00:12:18.073 14:38:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:18.073 14:38:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:18.073 14:38:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:12:18.073 ************************************ 00:12:18.073 START TEST dd_flag_noatime_forced_aio 00:12:18.073 ************************************ 00:12:18.073 14:38:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1127 -- # noatime 00:12:18.073 14:38:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:12:18.073 14:38:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:12:18.073 14:38:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:12:18.073 14:38:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:12:18.073 14:38:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:12:18.073 14:38:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:18.073 14:38:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1730731106 00:12:18.073 14:38:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:18.073 14:38:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1730731107 00:12:18.073 14:38:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:12:19.006 14:38:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:19.006 [2024-11-04 14:38:28.107403] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
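The dd_flag_noatime_forced_aio trace here records the source file's access time, copies it with --iflag=noatime, and confirms the atime did not move; a later copy without the flag is expected to advance it. A sketch of the check, with paths and stat calls taken from the xtrace (illustrative only):

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  SRC=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  DST=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  atime_if=$(stat --printf=%X "$SRC")              # e.g. 1730731106 in the trace
  sleep 1
  "$DD" --aio --if="$SRC" --iflag=noatime --of="$DST"
  (( $(stat --printf=%X "$SRC") == atime_if ))     # noatime: access time unchanged
  "$DD" --aio --if="$SRC" --of="$DST"
  (( atime_if < $(stat --printf=%X "$SRC") ))      # normal read: access time advances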
00:12:19.007 [2024-11-04 14:38:28.107472] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59672 ] 00:12:19.264 [2024-11-04 14:38:28.248014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.264 [2024-11-04 14:38:28.283131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.264 [2024-11-04 14:38:28.313939] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:19.264  [2024-11-04T14:38:28.673Z] Copying: 512/512 [B] (average 500 kBps) 00:12:19.533 00:12:19.533 14:38:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:19.533 14:38:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1730731106 )) 00:12:19.533 14:38:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:19.533 14:38:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1730731107 )) 00:12:19.533 14:38:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:19.533 [2024-11-04 14:38:28.499052] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:12:19.533 [2024-11-04 14:38:28.499114] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59683 ] 00:12:19.533 [2024-11-04 14:38:28.640584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.791 [2024-11-04 14:38:28.676823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.791 [2024-11-04 14:38:28.707217] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:19.791  [2024-11-04T14:38:28.931Z] Copying: 512/512 [B] (average 500 kBps) 00:12:19.791 00:12:19.791 14:38:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:19.791 14:38:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1730731108 )) 00:12:19.791 00:12:19.791 real 0m1.801s 00:12:19.791 user 0m0.391s 00:12:19.791 sys 0m0.172s 00:12:19.791 14:38:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:19.791 ************************************ 00:12:19.791 END TEST dd_flag_noatime_forced_aio 00:12:19.791 ************************************ 00:12:19.791 14:38:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:12:19.791 14:38:28 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:12:19.791 14:38:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:19.791 14:38:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:19.791 14:38:28 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:12:19.791 ************************************ 00:12:19.791 START TEST dd_flags_misc_forced_aio 00:12:19.791 ************************************ 00:12:19.791 14:38:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1127 -- # io 00:12:19.791 14:38:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:12:19.791 14:38:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:12:19.791 14:38:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:12:19.791 14:38:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:12:19.791 14:38:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:12:19.791 14:38:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:12:19.791 14:38:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:12:19.791 14:38:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:19.791 14:38:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:12:20.050 [2024-11-04 14:38:28.933400] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:12:20.050 [2024-11-04 14:38:28.933465] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59704 ] 00:12:20.050 [2024-11-04 14:38:29.073929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.050 [2024-11-04 14:38:29.109662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.050 [2024-11-04 14:38:29.139822] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:20.050  [2024-11-04T14:38:29.448Z] Copying: 512/512 [B] (average 500 kBps) 00:12:20.308 00:12:20.308 14:38:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 6zlahqorvw2p65dja2a13j4md3ntbxodpr0w87a8tou1sxhmc15p91enkvh6osqfx4nr376xsb4wickemojknlj28rg73u4cvbph6o7el6rokg49zuzqh9q3be4zbql1eyo28vl2balwp6u26xzmvrrsoyjgwjwjvojz2v1wgb86eswfbuae2asra0fmuy98nekdl0lg6ges9qzeklcc1zfyjyqx66r6wugj336x2ox0tk2divsmdc7sah14ukl4sy4eqc58mbxxawsc0nmgozgsg3ppbnn8wkkkto46wy24pkducb0fs5ovp2bskyww3j3blkvmf9kv2yrbp7pnrqaih0zfawy1hfset0x4cpj6v3wm4pna1llq7gjyr2a91ak0j45vq8ix0qci8emqjd2fb1u17235atbxjwgaw3sdiexmic2dhcfjlj6a6txms0jycmpxxpbws204p5ygkpn6pw7gd137bbe52zu4vbsyt05e4juiy3saji5lvxmk == 
\6\z\l\a\h\q\o\r\v\w\2\p\6\5\d\j\a\2\a\1\3\j\4\m\d\3\n\t\b\x\o\d\p\r\0\w\8\7\a\8\t\o\u\1\s\x\h\m\c\1\5\p\9\1\e\n\k\v\h\6\o\s\q\f\x\4\n\r\3\7\6\x\s\b\4\w\i\c\k\e\m\o\j\k\n\l\j\2\8\r\g\7\3\u\4\c\v\b\p\h\6\o\7\e\l\6\r\o\k\g\4\9\z\u\z\q\h\9\q\3\b\e\4\z\b\q\l\1\e\y\o\2\8\v\l\2\b\a\l\w\p\6\u\2\6\x\z\m\v\r\r\s\o\y\j\g\w\j\w\j\v\o\j\z\2\v\1\w\g\b\8\6\e\s\w\f\b\u\a\e\2\a\s\r\a\0\f\m\u\y\9\8\n\e\k\d\l\0\l\g\6\g\e\s\9\q\z\e\k\l\c\c\1\z\f\y\j\y\q\x\6\6\r\6\w\u\g\j\3\3\6\x\2\o\x\0\t\k\2\d\i\v\s\m\d\c\7\s\a\h\1\4\u\k\l\4\s\y\4\e\q\c\5\8\m\b\x\x\a\w\s\c\0\n\m\g\o\z\g\s\g\3\p\p\b\n\n\8\w\k\k\k\t\o\4\6\w\y\2\4\p\k\d\u\c\b\0\f\s\5\o\v\p\2\b\s\k\y\w\w\3\j\3\b\l\k\v\m\f\9\k\v\2\y\r\b\p\7\p\n\r\q\a\i\h\0\z\f\a\w\y\1\h\f\s\e\t\0\x\4\c\p\j\6\v\3\w\m\4\p\n\a\1\l\l\q\7\g\j\y\r\2\a\9\1\a\k\0\j\4\5\v\q\8\i\x\0\q\c\i\8\e\m\q\j\d\2\f\b\1\u\1\7\2\3\5\a\t\b\x\j\w\g\a\w\3\s\d\i\e\x\m\i\c\2\d\h\c\f\j\l\j\6\a\6\t\x\m\s\0\j\y\c\m\p\x\x\p\b\w\s\2\0\4\p\5\y\g\k\p\n\6\p\w\7\g\d\1\3\7\b\b\e\5\2\z\u\4\v\b\s\y\t\0\5\e\4\j\u\i\y\3\s\a\j\i\5\l\v\x\m\k ]] 00:12:20.308 14:38:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:20.308 14:38:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:12:20.308 [2024-11-04 14:38:29.309911] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:12:20.308 [2024-11-04 14:38:29.309977] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59712 ] 00:12:20.308 [2024-11-04 14:38:29.446781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.565 [2024-11-04 14:38:29.482597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.565 [2024-11-04 14:38:29.513224] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:20.565  [2024-11-04T14:38:29.705Z] Copying: 512/512 [B] (average 500 kBps) 00:12:20.565 00:12:20.565 14:38:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 6zlahqorvw2p65dja2a13j4md3ntbxodpr0w87a8tou1sxhmc15p91enkvh6osqfx4nr376xsb4wickemojknlj28rg73u4cvbph6o7el6rokg49zuzqh9q3be4zbql1eyo28vl2balwp6u26xzmvrrsoyjgwjwjvojz2v1wgb86eswfbuae2asra0fmuy98nekdl0lg6ges9qzeklcc1zfyjyqx66r6wugj336x2ox0tk2divsmdc7sah14ukl4sy4eqc58mbxxawsc0nmgozgsg3ppbnn8wkkkto46wy24pkducb0fs5ovp2bskyww3j3blkvmf9kv2yrbp7pnrqaih0zfawy1hfset0x4cpj6v3wm4pna1llq7gjyr2a91ak0j45vq8ix0qci8emqjd2fb1u17235atbxjwgaw3sdiexmic2dhcfjlj6a6txms0jycmpxxpbws204p5ygkpn6pw7gd137bbe52zu4vbsyt05e4juiy3saji5lvxmk == 
\6\z\l\a\h\q\o\r\v\w\2\p\6\5\d\j\a\2\a\1\3\j\4\m\d\3\n\t\b\x\o\d\p\r\0\w\8\7\a\8\t\o\u\1\s\x\h\m\c\1\5\p\9\1\e\n\k\v\h\6\o\s\q\f\x\4\n\r\3\7\6\x\s\b\4\w\i\c\k\e\m\o\j\k\n\l\j\2\8\r\g\7\3\u\4\c\v\b\p\h\6\o\7\e\l\6\r\o\k\g\4\9\z\u\z\q\h\9\q\3\b\e\4\z\b\q\l\1\e\y\o\2\8\v\l\2\b\a\l\w\p\6\u\2\6\x\z\m\v\r\r\s\o\y\j\g\w\j\w\j\v\o\j\z\2\v\1\w\g\b\8\6\e\s\w\f\b\u\a\e\2\a\s\r\a\0\f\m\u\y\9\8\n\e\k\d\l\0\l\g\6\g\e\s\9\q\z\e\k\l\c\c\1\z\f\y\j\y\q\x\6\6\r\6\w\u\g\j\3\3\6\x\2\o\x\0\t\k\2\d\i\v\s\m\d\c\7\s\a\h\1\4\u\k\l\4\s\y\4\e\q\c\5\8\m\b\x\x\a\w\s\c\0\n\m\g\o\z\g\s\g\3\p\p\b\n\n\8\w\k\k\k\t\o\4\6\w\y\2\4\p\k\d\u\c\b\0\f\s\5\o\v\p\2\b\s\k\y\w\w\3\j\3\b\l\k\v\m\f\9\k\v\2\y\r\b\p\7\p\n\r\q\a\i\h\0\z\f\a\w\y\1\h\f\s\e\t\0\x\4\c\p\j\6\v\3\w\m\4\p\n\a\1\l\l\q\7\g\j\y\r\2\a\9\1\a\k\0\j\4\5\v\q\8\i\x\0\q\c\i\8\e\m\q\j\d\2\f\b\1\u\1\7\2\3\5\a\t\b\x\j\w\g\a\w\3\s\d\i\e\x\m\i\c\2\d\h\c\f\j\l\j\6\a\6\t\x\m\s\0\j\y\c\m\p\x\x\p\b\w\s\2\0\4\p\5\y\g\k\p\n\6\p\w\7\g\d\1\3\7\b\b\e\5\2\z\u\4\v\b\s\y\t\0\5\e\4\j\u\i\y\3\s\a\j\i\5\l\v\x\m\k ]] 00:12:20.565 14:38:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:20.565 14:38:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:12:20.566 [2024-11-04 14:38:29.683158] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:12:20.566 [2024-11-04 14:38:29.683223] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59719 ] 00:12:20.823 [2024-11-04 14:38:29.818173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.823 [2024-11-04 14:38:29.858696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.823 [2024-11-04 14:38:29.891565] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:20.823  [2024-11-04T14:38:30.220Z] Copying: 512/512 [B] (average 100 kBps) 00:12:21.080 00:12:21.081 14:38:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 6zlahqorvw2p65dja2a13j4md3ntbxodpr0w87a8tou1sxhmc15p91enkvh6osqfx4nr376xsb4wickemojknlj28rg73u4cvbph6o7el6rokg49zuzqh9q3be4zbql1eyo28vl2balwp6u26xzmvrrsoyjgwjwjvojz2v1wgb86eswfbuae2asra0fmuy98nekdl0lg6ges9qzeklcc1zfyjyqx66r6wugj336x2ox0tk2divsmdc7sah14ukl4sy4eqc58mbxxawsc0nmgozgsg3ppbnn8wkkkto46wy24pkducb0fs5ovp2bskyww3j3blkvmf9kv2yrbp7pnrqaih0zfawy1hfset0x4cpj6v3wm4pna1llq7gjyr2a91ak0j45vq8ix0qci8emqjd2fb1u17235atbxjwgaw3sdiexmic2dhcfjlj6a6txms0jycmpxxpbws204p5ygkpn6pw7gd137bbe52zu4vbsyt05e4juiy3saji5lvxmk == 
\6\z\l\a\h\q\o\r\v\w\2\p\6\5\d\j\a\2\a\1\3\j\4\m\d\3\n\t\b\x\o\d\p\r\0\w\8\7\a\8\t\o\u\1\s\x\h\m\c\1\5\p\9\1\e\n\k\v\h\6\o\s\q\f\x\4\n\r\3\7\6\x\s\b\4\w\i\c\k\e\m\o\j\k\n\l\j\2\8\r\g\7\3\u\4\c\v\b\p\h\6\o\7\e\l\6\r\o\k\g\4\9\z\u\z\q\h\9\q\3\b\e\4\z\b\q\l\1\e\y\o\2\8\v\l\2\b\a\l\w\p\6\u\2\6\x\z\m\v\r\r\s\o\y\j\g\w\j\w\j\v\o\j\z\2\v\1\w\g\b\8\6\e\s\w\f\b\u\a\e\2\a\s\r\a\0\f\m\u\y\9\8\n\e\k\d\l\0\l\g\6\g\e\s\9\q\z\e\k\l\c\c\1\z\f\y\j\y\q\x\6\6\r\6\w\u\g\j\3\3\6\x\2\o\x\0\t\k\2\d\i\v\s\m\d\c\7\s\a\h\1\4\u\k\l\4\s\y\4\e\q\c\5\8\m\b\x\x\a\w\s\c\0\n\m\g\o\z\g\s\g\3\p\p\b\n\n\8\w\k\k\k\t\o\4\6\w\y\2\4\p\k\d\u\c\b\0\f\s\5\o\v\p\2\b\s\k\y\w\w\3\j\3\b\l\k\v\m\f\9\k\v\2\y\r\b\p\7\p\n\r\q\a\i\h\0\z\f\a\w\y\1\h\f\s\e\t\0\x\4\c\p\j\6\v\3\w\m\4\p\n\a\1\l\l\q\7\g\j\y\r\2\a\9\1\a\k\0\j\4\5\v\q\8\i\x\0\q\c\i\8\e\m\q\j\d\2\f\b\1\u\1\7\2\3\5\a\t\b\x\j\w\g\a\w\3\s\d\i\e\x\m\i\c\2\d\h\c\f\j\l\j\6\a\6\t\x\m\s\0\j\y\c\m\p\x\x\p\b\w\s\2\0\4\p\5\y\g\k\p\n\6\p\w\7\g\d\1\3\7\b\b\e\5\2\z\u\4\v\b\s\y\t\0\5\e\4\j\u\i\y\3\s\a\j\i\5\l\v\x\m\k ]] 00:12:21.081 14:38:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:21.081 14:38:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:12:21.081 [2024-11-04 14:38:30.077707] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:12:21.081 [2024-11-04 14:38:30.077765] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59727 ] 00:12:21.081 [2024-11-04 14:38:30.214892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.338 [2024-11-04 14:38:30.251280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.338 [2024-11-04 14:38:30.281918] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:21.339  [2024-11-04T14:38:30.479Z] Copying: 512/512 [B] (average 500 kBps) 00:12:21.339 00:12:21.339 14:38:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 6zlahqorvw2p65dja2a13j4md3ntbxodpr0w87a8tou1sxhmc15p91enkvh6osqfx4nr376xsb4wickemojknlj28rg73u4cvbph6o7el6rokg49zuzqh9q3be4zbql1eyo28vl2balwp6u26xzmvrrsoyjgwjwjvojz2v1wgb86eswfbuae2asra0fmuy98nekdl0lg6ges9qzeklcc1zfyjyqx66r6wugj336x2ox0tk2divsmdc7sah14ukl4sy4eqc58mbxxawsc0nmgozgsg3ppbnn8wkkkto46wy24pkducb0fs5ovp2bskyww3j3blkvmf9kv2yrbp7pnrqaih0zfawy1hfset0x4cpj6v3wm4pna1llq7gjyr2a91ak0j45vq8ix0qci8emqjd2fb1u17235atbxjwgaw3sdiexmic2dhcfjlj6a6txms0jycmpxxpbws204p5ygkpn6pw7gd137bbe52zu4vbsyt05e4juiy3saji5lvxmk == 
\6\z\l\a\h\q\o\r\v\w\2\p\6\5\d\j\a\2\a\1\3\j\4\m\d\3\n\t\b\x\o\d\p\r\0\w\8\7\a\8\t\o\u\1\s\x\h\m\c\1\5\p\9\1\e\n\k\v\h\6\o\s\q\f\x\4\n\r\3\7\6\x\s\b\4\w\i\c\k\e\m\o\j\k\n\l\j\2\8\r\g\7\3\u\4\c\v\b\p\h\6\o\7\e\l\6\r\o\k\g\4\9\z\u\z\q\h\9\q\3\b\e\4\z\b\q\l\1\e\y\o\2\8\v\l\2\b\a\l\w\p\6\u\2\6\x\z\m\v\r\r\s\o\y\j\g\w\j\w\j\v\o\j\z\2\v\1\w\g\b\8\6\e\s\w\f\b\u\a\e\2\a\s\r\a\0\f\m\u\y\9\8\n\e\k\d\l\0\l\g\6\g\e\s\9\q\z\e\k\l\c\c\1\z\f\y\j\y\q\x\6\6\r\6\w\u\g\j\3\3\6\x\2\o\x\0\t\k\2\d\i\v\s\m\d\c\7\s\a\h\1\4\u\k\l\4\s\y\4\e\q\c\5\8\m\b\x\x\a\w\s\c\0\n\m\g\o\z\g\s\g\3\p\p\b\n\n\8\w\k\k\k\t\o\4\6\w\y\2\4\p\k\d\u\c\b\0\f\s\5\o\v\p\2\b\s\k\y\w\w\3\j\3\b\l\k\v\m\f\9\k\v\2\y\r\b\p\7\p\n\r\q\a\i\h\0\z\f\a\w\y\1\h\f\s\e\t\0\x\4\c\p\j\6\v\3\w\m\4\p\n\a\1\l\l\q\7\g\j\y\r\2\a\9\1\a\k\0\j\4\5\v\q\8\i\x\0\q\c\i\8\e\m\q\j\d\2\f\b\1\u\1\7\2\3\5\a\t\b\x\j\w\g\a\w\3\s\d\i\e\x\m\i\c\2\d\h\c\f\j\l\j\6\a\6\t\x\m\s\0\j\y\c\m\p\x\x\p\b\w\s\2\0\4\p\5\y\g\k\p\n\6\p\w\7\g\d\1\3\7\b\b\e\5\2\z\u\4\v\b\s\y\t\0\5\e\4\j\u\i\y\3\s\a\j\i\5\l\v\x\m\k ]] 00:12:21.339 14:38:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:12:21.339 14:38:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:12:21.339 14:38:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:12:21.339 14:38:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:12:21.339 14:38:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:21.339 14:38:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:12:21.339 [2024-11-04 14:38:30.470408] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:21.339 [2024-11-04 14:38:30.470474] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59733 ] 00:12:21.597 [2024-11-04 14:38:30.610267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.597 [2024-11-04 14:38:30.646922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.597 [2024-11-04 14:38:30.678053] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:21.597  [2024-11-04T14:38:30.995Z] Copying: 512/512 [B] (average 500 kBps) 00:12:21.855 00:12:21.855 14:38:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 53ek3wj8pty31zj8eav41sc65kqccym4ok67b9u916uudztost7d1soahb79k1vx4qke5xqhdvadd0f481qa771xjeqqu5xsr6731tt6nl0oobsrhrhl3r6yxf8sqpnkqr9oeck89ipu39qosaehn61qv1f7b893oe53bysy1rkxpima62hvflkmmjlyk5zs7rogtep2ntkcr7jvdivl1yadj2me0ap1jq1mbvj4coqn7u4dto3ze4i3pxziks0o2uu1mn0eby3tpkouj79wlmbxqyg7g7xe50ay7x36b556eou37ax9jbejvxtpwmmbx6u3chqldaau3mn4d96y9vyjkiz4kpax6tg5u9xvvj8zynnjr408poijzm9xs89a7vmwwnw1a2930e7rlxryvobyr4u7gjo7z39rxmadvkcabireit83zkwmohn5j9bu3hi2lrbamyzram020vfue08ed1k9luh0m8u5gtyzgdutcdw312p43qxlunfatseg == \5\3\e\k\3\w\j\8\p\t\y\3\1\z\j\8\e\a\v\4\1\s\c\6\5\k\q\c\c\y\m\4\o\k\6\7\b\9\u\9\1\6\u\u\d\z\t\o\s\t\7\d\1\s\o\a\h\b\7\9\k\1\v\x\4\q\k\e\5\x\q\h\d\v\a\d\d\0\f\4\8\1\q\a\7\7\1\x\j\e\q\q\u\5\x\s\r\6\7\3\1\t\t\6\n\l\0\o\o\b\s\r\h\r\h\l\3\r\6\y\x\f\8\s\q\p\n\k\q\r\9\o\e\c\k\8\9\i\p\u\3\9\q\o\s\a\e\h\n\6\1\q\v\1\f\7\b\8\9\3\o\e\5\3\b\y\s\y\1\r\k\x\p\i\m\a\6\2\h\v\f\l\k\m\m\j\l\y\k\5\z\s\7\r\o\g\t\e\p\2\n\t\k\c\r\7\j\v\d\i\v\l\1\y\a\d\j\2\m\e\0\a\p\1\j\q\1\m\b\v\j\4\c\o\q\n\7\u\4\d\t\o\3\z\e\4\i\3\p\x\z\i\k\s\0\o\2\u\u\1\m\n\0\e\b\y\3\t\p\k\o\u\j\7\9\w\l\m\b\x\q\y\g\7\g\7\x\e\5\0\a\y\7\x\3\6\b\5\5\6\e\o\u\3\7\a\x\9\j\b\e\j\v\x\t\p\w\m\m\b\x\6\u\3\c\h\q\l\d\a\a\u\3\m\n\4\d\9\6\y\9\v\y\j\k\i\z\4\k\p\a\x\6\t\g\5\u\9\x\v\v\j\8\z\y\n\n\j\r\4\0\8\p\o\i\j\z\m\9\x\s\8\9\a\7\v\m\w\w\n\w\1\a\2\9\3\0\e\7\r\l\x\r\y\v\o\b\y\r\4\u\7\g\j\o\7\z\3\9\r\x\m\a\d\v\k\c\a\b\i\r\e\i\t\8\3\z\k\w\m\o\h\n\5\j\9\b\u\3\h\i\2\l\r\b\a\m\y\z\r\a\m\0\2\0\v\f\u\e\0\8\e\d\1\k\9\l\u\h\0\m\8\u\5\g\t\y\z\g\d\u\t\c\d\w\3\1\2\p\4\3\q\x\l\u\n\f\a\t\s\e\g ]] 00:12:21.855 14:38:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:21.855 14:38:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:12:21.855 [2024-11-04 14:38:30.861363] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:21.855 [2024-11-04 14:38:30.861453] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59736 ] 00:12:22.113 [2024-11-04 14:38:31.006572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.113 [2024-11-04 14:38:31.065296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.113 [2024-11-04 14:38:31.110074] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:22.113  [2024-11-04T14:38:31.512Z] Copying: 512/512 [B] (average 500 kBps) 00:12:22.372 00:12:22.372 14:38:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 53ek3wj8pty31zj8eav41sc65kqccym4ok67b9u916uudztost7d1soahb79k1vx4qke5xqhdvadd0f481qa771xjeqqu5xsr6731tt6nl0oobsrhrhl3r6yxf8sqpnkqr9oeck89ipu39qosaehn61qv1f7b893oe53bysy1rkxpima62hvflkmmjlyk5zs7rogtep2ntkcr7jvdivl1yadj2me0ap1jq1mbvj4coqn7u4dto3ze4i3pxziks0o2uu1mn0eby3tpkouj79wlmbxqyg7g7xe50ay7x36b556eou37ax9jbejvxtpwmmbx6u3chqldaau3mn4d96y9vyjkiz4kpax6tg5u9xvvj8zynnjr408poijzm9xs89a7vmwwnw1a2930e7rlxryvobyr4u7gjo7z39rxmadvkcabireit83zkwmohn5j9bu3hi2lrbamyzram020vfue08ed1k9luh0m8u5gtyzgdutcdw312p43qxlunfatseg == \5\3\e\k\3\w\j\8\p\t\y\3\1\z\j\8\e\a\v\4\1\s\c\6\5\k\q\c\c\y\m\4\o\k\6\7\b\9\u\9\1\6\u\u\d\z\t\o\s\t\7\d\1\s\o\a\h\b\7\9\k\1\v\x\4\q\k\e\5\x\q\h\d\v\a\d\d\0\f\4\8\1\q\a\7\7\1\x\j\e\q\q\u\5\x\s\r\6\7\3\1\t\t\6\n\l\0\o\o\b\s\r\h\r\h\l\3\r\6\y\x\f\8\s\q\p\n\k\q\r\9\o\e\c\k\8\9\i\p\u\3\9\q\o\s\a\e\h\n\6\1\q\v\1\f\7\b\8\9\3\o\e\5\3\b\y\s\y\1\r\k\x\p\i\m\a\6\2\h\v\f\l\k\m\m\j\l\y\k\5\z\s\7\r\o\g\t\e\p\2\n\t\k\c\r\7\j\v\d\i\v\l\1\y\a\d\j\2\m\e\0\a\p\1\j\q\1\m\b\v\j\4\c\o\q\n\7\u\4\d\t\o\3\z\e\4\i\3\p\x\z\i\k\s\0\o\2\u\u\1\m\n\0\e\b\y\3\t\p\k\o\u\j\7\9\w\l\m\b\x\q\y\g\7\g\7\x\e\5\0\a\y\7\x\3\6\b\5\5\6\e\o\u\3\7\a\x\9\j\b\e\j\v\x\t\p\w\m\m\b\x\6\u\3\c\h\q\l\d\a\a\u\3\m\n\4\d\9\6\y\9\v\y\j\k\i\z\4\k\p\a\x\6\t\g\5\u\9\x\v\v\j\8\z\y\n\n\j\r\4\0\8\p\o\i\j\z\m\9\x\s\8\9\a\7\v\m\w\w\n\w\1\a\2\9\3\0\e\7\r\l\x\r\y\v\o\b\y\r\4\u\7\g\j\o\7\z\3\9\r\x\m\a\d\v\k\c\a\b\i\r\e\i\t\8\3\z\k\w\m\o\h\n\5\j\9\b\u\3\h\i\2\l\r\b\a\m\y\z\r\a\m\0\2\0\v\f\u\e\0\8\e\d\1\k\9\l\u\h\0\m\8\u\5\g\t\y\z\g\d\u\t\c\d\w\3\1\2\p\4\3\q\x\l\u\n\f\a\t\s\e\g ]] 00:12:22.372 14:38:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:22.372 14:38:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:12:22.372 [2024-11-04 14:38:31.302320] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:22.372 [2024-11-04 14:38:31.302380] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59744 ] 00:12:22.372 [2024-11-04 14:38:31.438155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.372 [2024-11-04 14:38:31.477938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.372 [2024-11-04 14:38:31.510951] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:22.630  [2024-11-04T14:38:31.770Z] Copying: 512/512 [B] (average 166 kBps) 00:12:22.630 00:12:22.631 14:38:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 53ek3wj8pty31zj8eav41sc65kqccym4ok67b9u916uudztost7d1soahb79k1vx4qke5xqhdvadd0f481qa771xjeqqu5xsr6731tt6nl0oobsrhrhl3r6yxf8sqpnkqr9oeck89ipu39qosaehn61qv1f7b893oe53bysy1rkxpima62hvflkmmjlyk5zs7rogtep2ntkcr7jvdivl1yadj2me0ap1jq1mbvj4coqn7u4dto3ze4i3pxziks0o2uu1mn0eby3tpkouj79wlmbxqyg7g7xe50ay7x36b556eou37ax9jbejvxtpwmmbx6u3chqldaau3mn4d96y9vyjkiz4kpax6tg5u9xvvj8zynnjr408poijzm9xs89a7vmwwnw1a2930e7rlxryvobyr4u7gjo7z39rxmadvkcabireit83zkwmohn5j9bu3hi2lrbamyzram020vfue08ed1k9luh0m8u5gtyzgdutcdw312p43qxlunfatseg == \5\3\e\k\3\w\j\8\p\t\y\3\1\z\j\8\e\a\v\4\1\s\c\6\5\k\q\c\c\y\m\4\o\k\6\7\b\9\u\9\1\6\u\u\d\z\t\o\s\t\7\d\1\s\o\a\h\b\7\9\k\1\v\x\4\q\k\e\5\x\q\h\d\v\a\d\d\0\f\4\8\1\q\a\7\7\1\x\j\e\q\q\u\5\x\s\r\6\7\3\1\t\t\6\n\l\0\o\o\b\s\r\h\r\h\l\3\r\6\y\x\f\8\s\q\p\n\k\q\r\9\o\e\c\k\8\9\i\p\u\3\9\q\o\s\a\e\h\n\6\1\q\v\1\f\7\b\8\9\3\o\e\5\3\b\y\s\y\1\r\k\x\p\i\m\a\6\2\h\v\f\l\k\m\m\j\l\y\k\5\z\s\7\r\o\g\t\e\p\2\n\t\k\c\r\7\j\v\d\i\v\l\1\y\a\d\j\2\m\e\0\a\p\1\j\q\1\m\b\v\j\4\c\o\q\n\7\u\4\d\t\o\3\z\e\4\i\3\p\x\z\i\k\s\0\o\2\u\u\1\m\n\0\e\b\y\3\t\p\k\o\u\j\7\9\w\l\m\b\x\q\y\g\7\g\7\x\e\5\0\a\y\7\x\3\6\b\5\5\6\e\o\u\3\7\a\x\9\j\b\e\j\v\x\t\p\w\m\m\b\x\6\u\3\c\h\q\l\d\a\a\u\3\m\n\4\d\9\6\y\9\v\y\j\k\i\z\4\k\p\a\x\6\t\g\5\u\9\x\v\v\j\8\z\y\n\n\j\r\4\0\8\p\o\i\j\z\m\9\x\s\8\9\a\7\v\m\w\w\n\w\1\a\2\9\3\0\e\7\r\l\x\r\y\v\o\b\y\r\4\u\7\g\j\o\7\z\3\9\r\x\m\a\d\v\k\c\a\b\i\r\e\i\t\8\3\z\k\w\m\o\h\n\5\j\9\b\u\3\h\i\2\l\r\b\a\m\y\z\r\a\m\0\2\0\v\f\u\e\0\8\e\d\1\k\9\l\u\h\0\m\8\u\5\g\t\y\z\g\d\u\t\c\d\w\3\1\2\p\4\3\q\x\l\u\n\f\a\t\s\e\g ]] 00:12:22.631 14:38:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:22.631 14:38:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:12:22.631 [2024-11-04 14:38:31.695483] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:22.631 [2024-11-04 14:38:31.695545] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59751 ] 00:12:22.889 [2024-11-04 14:38:31.833536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.889 [2024-11-04 14:38:31.869335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.889 [2024-11-04 14:38:31.899963] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:22.889  [2024-11-04T14:38:32.290Z] Copying: 512/512 [B] (average 125 kBps) 00:12:23.150 00:12:23.151 14:38:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 53ek3wj8pty31zj8eav41sc65kqccym4ok67b9u916uudztost7d1soahb79k1vx4qke5xqhdvadd0f481qa771xjeqqu5xsr6731tt6nl0oobsrhrhl3r6yxf8sqpnkqr9oeck89ipu39qosaehn61qv1f7b893oe53bysy1rkxpima62hvflkmmjlyk5zs7rogtep2ntkcr7jvdivl1yadj2me0ap1jq1mbvj4coqn7u4dto3ze4i3pxziks0o2uu1mn0eby3tpkouj79wlmbxqyg7g7xe50ay7x36b556eou37ax9jbejvxtpwmmbx6u3chqldaau3mn4d96y9vyjkiz4kpax6tg5u9xvvj8zynnjr408poijzm9xs89a7vmwwnw1a2930e7rlxryvobyr4u7gjo7z39rxmadvkcabireit83zkwmohn5j9bu3hi2lrbamyzram020vfue08ed1k9luh0m8u5gtyzgdutcdw312p43qxlunfatseg == \5\3\e\k\3\w\j\8\p\t\y\3\1\z\j\8\e\a\v\4\1\s\c\6\5\k\q\c\c\y\m\4\o\k\6\7\b\9\u\9\1\6\u\u\d\z\t\o\s\t\7\d\1\s\o\a\h\b\7\9\k\1\v\x\4\q\k\e\5\x\q\h\d\v\a\d\d\0\f\4\8\1\q\a\7\7\1\x\j\e\q\q\u\5\x\s\r\6\7\3\1\t\t\6\n\l\0\o\o\b\s\r\h\r\h\l\3\r\6\y\x\f\8\s\q\p\n\k\q\r\9\o\e\c\k\8\9\i\p\u\3\9\q\o\s\a\e\h\n\6\1\q\v\1\f\7\b\8\9\3\o\e\5\3\b\y\s\y\1\r\k\x\p\i\m\a\6\2\h\v\f\l\k\m\m\j\l\y\k\5\z\s\7\r\o\g\t\e\p\2\n\t\k\c\r\7\j\v\d\i\v\l\1\y\a\d\j\2\m\e\0\a\p\1\j\q\1\m\b\v\j\4\c\o\q\n\7\u\4\d\t\o\3\z\e\4\i\3\p\x\z\i\k\s\0\o\2\u\u\1\m\n\0\e\b\y\3\t\p\k\o\u\j\7\9\w\l\m\b\x\q\y\g\7\g\7\x\e\5\0\a\y\7\x\3\6\b\5\5\6\e\o\u\3\7\a\x\9\j\b\e\j\v\x\t\p\w\m\m\b\x\6\u\3\c\h\q\l\d\a\a\u\3\m\n\4\d\9\6\y\9\v\y\j\k\i\z\4\k\p\a\x\6\t\g\5\u\9\x\v\v\j\8\z\y\n\n\j\r\4\0\8\p\o\i\j\z\m\9\x\s\8\9\a\7\v\m\w\w\n\w\1\a\2\9\3\0\e\7\r\l\x\r\y\v\o\b\y\r\4\u\7\g\j\o\7\z\3\9\r\x\m\a\d\v\k\c\a\b\i\r\e\i\t\8\3\z\k\w\m\o\h\n\5\j\9\b\u\3\h\i\2\l\r\b\a\m\y\z\r\a\m\0\2\0\v\f\u\e\0\8\e\d\1\k\9\l\u\h\0\m\8\u\5\g\t\y\z\g\d\u\t\c\d\w\3\1\2\p\4\3\q\x\l\u\n\f\a\t\s\e\g ]] 00:12:23.151 00:12:23.151 real 0m3.154s 00:12:23.151 user 0m1.519s 00:12:23.151 sys 0m0.671s 00:12:23.151 14:38:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:23.151 14:38:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:12:23.151 ************************************ 00:12:23.151 END TEST dd_flags_misc_forced_aio 00:12:23.151 ************************************ 00:12:23.151 14:38:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:12:23.151 14:38:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:12:23.151 14:38:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:12:23.151 00:12:23.151 real 0m14.781s 00:12:23.151 user 0m6.273s 00:12:23.151 sys 0m3.891s 00:12:23.151 14:38:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:23.151 14:38:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
00:12:23.151 ************************************ 00:12:23.151 END TEST spdk_dd_posix 00:12:23.151 ************************************ 00:12:23.151 14:38:32 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:12:23.151 14:38:32 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:23.151 14:38:32 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:23.151 14:38:32 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:12:23.151 ************************************ 00:12:23.151 START TEST spdk_dd_malloc 00:12:23.151 ************************************ 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:12:23.151 * Looking for test storage... 00:12:23.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # lcov --version 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:23.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.151 --rc genhtml_branch_coverage=1 00:12:23.151 --rc genhtml_function_coverage=1 00:12:23.151 --rc genhtml_legend=1 00:12:23.151 --rc geninfo_all_blocks=1 00:12:23.151 --rc geninfo_unexecuted_blocks=1 00:12:23.151 00:12:23.151 ' 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:23.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.151 --rc genhtml_branch_coverage=1 00:12:23.151 --rc genhtml_function_coverage=1 00:12:23.151 --rc genhtml_legend=1 00:12:23.151 --rc geninfo_all_blocks=1 00:12:23.151 --rc geninfo_unexecuted_blocks=1 00:12:23.151 00:12:23.151 ' 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:23.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.151 --rc genhtml_branch_coverage=1 00:12:23.151 --rc genhtml_function_coverage=1 00:12:23.151 --rc genhtml_legend=1 00:12:23.151 --rc geninfo_all_blocks=1 00:12:23.151 --rc geninfo_unexecuted_blocks=1 00:12:23.151 00:12:23.151 ' 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:23.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.151 --rc genhtml_branch_coverage=1 00:12:23.151 --rc genhtml_function_coverage=1 00:12:23.151 --rc genhtml_legend=1 00:12:23.151 --rc geninfo_all_blocks=1 00:12:23.151 --rc geninfo_unexecuted_blocks=1 00:12:23.151 00:12:23.151 ' 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.151 14:38:32 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:12:23.151 ************************************ 00:12:23.151 START TEST dd_malloc_copy 00:12:23.151 ************************************ 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1127 -- # malloc_copy 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
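Those mbdev declarations are what gen_conf expands into the JSON that spdk_dd reads on /dev/fd/62 for the copy runs that follow (the full config is printed verbatim below). An equivalent standalone invocation, sketched with an illustrative temp-file path instead of the fd-62 plumbing, would look roughly like:

# two 512 MiB malloc bdevs (1048576 blocks x 512 B), copied malloc0 -> malloc1
cat > /tmp/dd_malloc.json <<'JSON'
{
  "subsystems": [
    { "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc1" },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ] }
  ]
}
JSON
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /tmp/dd_malloc.json

The second run below simply swaps --ib and --ob to copy the data back the other way.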
00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:23.151 14:38:32 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:12:23.412 [2024-11-04 14:38:32.306425] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:12:23.412 [2024-11-04 14:38:32.306487] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59828 ] 00:12:23.412 { 00:12:23.412 "subsystems": [ 00:12:23.412 { 00:12:23.412 "subsystem": "bdev", 00:12:23.412 "config": [ 00:12:23.412 { 00:12:23.412 "params": { 00:12:23.412 "block_size": 512, 00:12:23.412 "num_blocks": 1048576, 00:12:23.412 "name": "malloc0" 00:12:23.412 }, 00:12:23.412 "method": "bdev_malloc_create" 00:12:23.412 }, 00:12:23.412 { 00:12:23.412 "params": { 00:12:23.412 "block_size": 512, 00:12:23.412 "num_blocks": 1048576, 00:12:23.412 "name": "malloc1" 00:12:23.412 }, 00:12:23.412 "method": "bdev_malloc_create" 00:12:23.412 }, 00:12:23.412 { 00:12:23.412 "method": "bdev_wait_for_examine" 00:12:23.412 } 00:12:23.412 ] 00:12:23.412 } 00:12:23.412 ] 00:12:23.412 } 00:12:23.412 [2024-11-04 14:38:32.445618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.412 [2024-11-04 14:38:32.480644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.412 [2024-11-04 14:38:32.510904] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:24.785  [2024-11-04T14:38:34.899Z] Copying: 203/512 [MB] (203 MBps) [2024-11-04T14:38:35.468Z] Copying: 410/512 [MB] (207 MBps) [2024-11-04T14:38:35.727Z] Copying: 512/512 [MB] (average 205 MBps) 00:12:26.587 00:12:26.587 14:38:35 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:12:26.587 14:38:35 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:12:26.587 14:38:35 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:26.587 14:38:35 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:12:26.587 [2024-11-04 14:38:35.523204] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:26.587 [2024-11-04 14:38:35.523272] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59870 ] 00:12:26.587 { 00:12:26.587 "subsystems": [ 00:12:26.587 { 00:12:26.587 "subsystem": "bdev", 00:12:26.587 "config": [ 00:12:26.587 { 00:12:26.587 "params": { 00:12:26.587 "block_size": 512, 00:12:26.587 "num_blocks": 1048576, 00:12:26.587 "name": "malloc0" 00:12:26.587 }, 00:12:26.587 "method": "bdev_malloc_create" 00:12:26.587 }, 00:12:26.587 { 00:12:26.587 "params": { 00:12:26.587 "block_size": 512, 00:12:26.587 "num_blocks": 1048576, 00:12:26.587 "name": "malloc1" 00:12:26.587 }, 00:12:26.587 "method": "bdev_malloc_create" 00:12:26.587 }, 00:12:26.587 { 00:12:26.587 "method": "bdev_wait_for_examine" 00:12:26.587 } 00:12:26.587 ] 00:12:26.587 } 00:12:26.587 ] 00:12:26.587 } 00:12:26.587 [2024-11-04 14:38:35.660673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.587 [2024-11-04 14:38:35.696039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.845 [2024-11-04 14:38:35.727407] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:28.218  [2024-11-04T14:38:38.291Z] Copying: 207/512 [MB] (207 MBps) [2024-11-04T14:38:38.549Z] Copying: 416/512 [MB] (208 MBps) [2024-11-04T14:38:38.807Z] Copying: 512/512 [MB] (average 208 MBps) 00:12:29.667 00:12:29.667 00:12:29.667 real 0m6.384s 00:12:29.667 user 0m5.724s 00:12:29.667 sys 0m0.469s 00:12:29.667 14:38:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:29.667 14:38:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:12:29.667 ************************************ 00:12:29.667 END TEST dd_malloc_copy 00:12:29.667 ************************************ 00:12:29.667 ************************************ 00:12:29.667 END TEST spdk_dd_malloc 00:12:29.667 ************************************ 00:12:29.667 00:12:29.667 real 0m6.579s 00:12:29.667 user 0m5.823s 00:12:29.667 sys 0m0.556s 00:12:29.667 14:38:38 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:29.667 14:38:38 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:12:29.667 14:38:38 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:12:29.667 14:38:38 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:29.667 14:38:38 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:29.667 14:38:38 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:12:29.667 ************************************ 00:12:29.667 START TEST spdk_dd_bdev_to_bdev 00:12:29.667 ************************************ 00:12:29.667 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:12:29.667 * Looking for test storage... 
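run_test hands bdev_to_bdev.sh the two NVMe PCI addresses, and the setup xtrace further down maps them onto a pair of attach-controller entries (Nvme0n1 on 0000:00:10.0, Nvme1n1 on 0000:00:11.0) plus a magic string that ends up at the front of dd.dump0 (the 67108891-byte size reported below is the 27-byte magic line plus the 64 MiB appended by dd_inflate_file). A rough sketch of the offset-magic pattern exercised in the later runs, reconstructed from that xtrace (gen_conf is the suite's own JSON helper and the redirection details are simplified):

spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
nvme0=Nvme0;  bdev0=Nvme0n1;  nvme0_pci=0000:00:10.0
nvme1=Nvme1;  bdev1=Nvme1n1;  nvme1_pci=0000:00:11.0
magic='This Is Our Magic, find it'          # 26 characters, echoed into dd.dump0
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
for offset in 16 64; do
  # copy 65 MiB of Nvme0n1 (magic first) into Nvme1n1 starting at <offset> MiB
  "$spdk_dd" --ib="$bdev0" --ob="$bdev1" --count=65 --seek="$offset" --bs=1048576 --json <(gen_conf)
  # read 1 MiB back from the same offset and confirm it still begins with the magic
  "$spdk_dd" --ib="$bdev1" --of="$test_file1" --count=1 --skip="$offset" --bs=1048576 --json <(gen_conf)
  read -rn26 magic_check < "$test_file1"
  [[ $magic_check == "$magic" ]]
done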
00:12:29.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:12:29.667 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:29.667 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # lcov --version 00:12:29.667 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:29.925 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:29.925 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:29.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.926 --rc genhtml_branch_coverage=1 00:12:29.926 --rc genhtml_function_coverage=1 00:12:29.926 --rc genhtml_legend=1 00:12:29.926 --rc geninfo_all_blocks=1 00:12:29.926 --rc geninfo_unexecuted_blocks=1 00:12:29.926 00:12:29.926 ' 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:29.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.926 --rc genhtml_branch_coverage=1 00:12:29.926 --rc genhtml_function_coverage=1 00:12:29.926 --rc genhtml_legend=1 00:12:29.926 --rc geninfo_all_blocks=1 00:12:29.926 --rc geninfo_unexecuted_blocks=1 00:12:29.926 00:12:29.926 ' 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:29.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.926 --rc genhtml_branch_coverage=1 00:12:29.926 --rc genhtml_function_coverage=1 00:12:29.926 --rc genhtml_legend=1 00:12:29.926 --rc geninfo_all_blocks=1 00:12:29.926 --rc geninfo_unexecuted_blocks=1 00:12:29.926 00:12:29.926 ' 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:29.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.926 --rc genhtml_branch_coverage=1 00:12:29.926 --rc genhtml_function_coverage=1 00:12:29.926 --rc genhtml_legend=1 00:12:29.926 --rc geninfo_all_blocks=1 00:12:29.926 --rc geninfo_unexecuted_blocks=1 00:12:29.926 00:12:29.926 ' 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.926 14:38:38 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:12:29.926 ************************************ 00:12:29.926 START TEST dd_inflate_file 00:12:29.926 ************************************ 00:12:29.926 14:38:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:12:29.926 [2024-11-04 14:38:38.901344] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:29.926 [2024-11-04 14:38:38.901573] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59982 ] 00:12:29.926 [2024-11-04 14:38:39.030799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.926 [2024-11-04 14:38:39.061467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.185 [2024-11-04 14:38:39.089579] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:30.185  [2024-11-04T14:38:39.325Z] Copying: 64/64 [MB] (average 2133 MBps) 00:12:30.185 00:12:30.185 00:12:30.185 real 0m0.356s 00:12:30.185 user 0m0.178s 00:12:30.185 sys 0m0.171s 00:12:30.185 14:38:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:30.185 14:38:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:12:30.185 ************************************ 00:12:30.185 END TEST dd_inflate_file 00:12:30.185 ************************************ 00:12:30.185 14:38:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:12:30.185 14:38:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:12:30.185 14:38:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:12:30.185 14:38:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:12:30.185 14:38:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:30.185 14:38:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:12:30.185 14:38:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:12:30.185 14:38:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:12:30.185 14:38:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:12:30.185 ************************************ 00:12:30.185 START TEST dd_copy_to_out_bdev 00:12:30.185 ************************************ 00:12:30.185 14:38:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:12:30.185 [2024-11-04 14:38:39.302771] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:30.185 [2024-11-04 14:38:39.302834] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60014 ] 00:12:30.185 { 00:12:30.185 "subsystems": [ 00:12:30.185 { 00:12:30.185 "subsystem": "bdev", 00:12:30.185 "config": [ 00:12:30.185 { 00:12:30.185 "params": { 00:12:30.185 "trtype": "pcie", 00:12:30.185 "traddr": "0000:00:10.0", 00:12:30.185 "name": "Nvme0" 00:12:30.185 }, 00:12:30.185 "method": "bdev_nvme_attach_controller" 00:12:30.185 }, 00:12:30.185 { 00:12:30.185 "params": { 00:12:30.185 "trtype": "pcie", 00:12:30.185 "traddr": "0000:00:11.0", 00:12:30.185 "name": "Nvme1" 00:12:30.185 }, 00:12:30.185 "method": "bdev_nvme_attach_controller" 00:12:30.185 }, 00:12:30.185 { 00:12:30.185 "method": "bdev_wait_for_examine" 00:12:30.185 } 00:12:30.185 ] 00:12:30.185 } 00:12:30.185 ] 00:12:30.185 } 00:12:30.443 [2024-11-04 14:38:39.437823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.443 [2024-11-04 14:38:39.468590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.443 [2024-11-04 14:38:39.497917] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:31.375  [2024-11-04T14:38:40.773Z] Copying: 64/64 [MB] (average 91 MBps) 00:12:31.633 00:12:31.633 00:12:31.633 real 0m1.300s 00:12:31.633 user 0m1.085s 00:12:31.633 sys 0m1.034s 00:12:31.633 14:38:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:31.633 ************************************ 00:12:31.633 14:38:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:12:31.633 END TEST dd_copy_to_out_bdev 00:12:31.633 ************************************ 00:12:31.633 14:38:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:12:31.633 14:38:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:12:31.633 14:38:40 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:31.633 14:38:40 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:31.633 14:38:40 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:12:31.633 ************************************ 00:12:31.633 START TEST dd_offset_magic 00:12:31.633 ************************************ 00:12:31.633 14:38:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1127 -- # offset_magic 00:12:31.633 14:38:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:12:31.633 14:38:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:12:31.633 14:38:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:12:31.633 14:38:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:12:31.633 14:38:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:12:31.633 14:38:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:12:31.633 14:38:40 
spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:12:31.633 14:38:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:12:31.633 [2024-11-04 14:38:40.649518] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:12:31.633 [2024-11-04 14:38:40.649692] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60050 ] 00:12:31.633 { 00:12:31.633 "subsystems": [ 00:12:31.633 { 00:12:31.633 "subsystem": "bdev", 00:12:31.633 "config": [ 00:12:31.633 { 00:12:31.633 "params": { 00:12:31.633 "trtype": "pcie", 00:12:31.633 "traddr": "0000:00:10.0", 00:12:31.633 "name": "Nvme0" 00:12:31.633 }, 00:12:31.633 "method": "bdev_nvme_attach_controller" 00:12:31.633 }, 00:12:31.633 { 00:12:31.633 "params": { 00:12:31.633 "trtype": "pcie", 00:12:31.633 "traddr": "0000:00:11.0", 00:12:31.633 "name": "Nvme1" 00:12:31.633 }, 00:12:31.633 "method": "bdev_nvme_attach_controller" 00:12:31.633 }, 00:12:31.633 { 00:12:31.633 "method": "bdev_wait_for_examine" 00:12:31.633 } 00:12:31.633 ] 00:12:31.633 } 00:12:31.633 ] 00:12:31.633 } 00:12:31.985 [2024-11-04 14:38:40.789216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.985 [2024-11-04 14:38:40.825234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.985 [2024-11-04 14:38:40.855647] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:31.985  [2024-11-04T14:38:41.384Z] Copying: 65/65 [MB] (average 1140 MBps) 00:12:32.244 00:12:32.244 14:38:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:12:32.244 14:38:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:12:32.244 14:38:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:12:32.244 14:38:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:12:32.244 [2024-11-04 14:38:41.340723] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:32.244 [2024-11-04 14:38:41.340898] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60070 ] 00:12:32.244 { 00:12:32.244 "subsystems": [ 00:12:32.244 { 00:12:32.244 "subsystem": "bdev", 00:12:32.244 "config": [ 00:12:32.244 { 00:12:32.244 "params": { 00:12:32.244 "trtype": "pcie", 00:12:32.244 "traddr": "0000:00:10.0", 00:12:32.244 "name": "Nvme0" 00:12:32.244 }, 00:12:32.244 "method": "bdev_nvme_attach_controller" 00:12:32.244 }, 00:12:32.244 { 00:12:32.244 "params": { 00:12:32.244 "trtype": "pcie", 00:12:32.244 "traddr": "0000:00:11.0", 00:12:32.244 "name": "Nvme1" 00:12:32.244 }, 00:12:32.244 "method": "bdev_nvme_attach_controller" 00:12:32.244 }, 00:12:32.244 { 00:12:32.244 "method": "bdev_wait_for_examine" 00:12:32.244 } 00:12:32.244 ] 00:12:32.244 } 00:12:32.244 ] 00:12:32.244 } 00:12:32.503 [2024-11-04 14:38:41.481852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.503 [2024-11-04 14:38:41.517352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.503 [2024-11-04 14:38:41.547937] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:32.761  [2024-11-04T14:38:41.901Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:12:32.761 00:12:32.761 14:38:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:12:32.761 14:38:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:12:32.761 14:38:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:12:32.761 14:38:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:12:32.761 14:38:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:12:32.761 14:38:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:12:32.761 14:38:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:12:32.761 [2024-11-04 14:38:41.848197] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:32.761 [2024-11-04 14:38:41.848355] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60081 ] 00:12:32.761 { 00:12:32.761 "subsystems": [ 00:12:32.761 { 00:12:32.761 "subsystem": "bdev", 00:12:32.761 "config": [ 00:12:32.761 { 00:12:32.761 "params": { 00:12:32.761 "trtype": "pcie", 00:12:32.761 "traddr": "0000:00:10.0", 00:12:32.761 "name": "Nvme0" 00:12:32.761 }, 00:12:32.761 "method": "bdev_nvme_attach_controller" 00:12:32.761 }, 00:12:32.761 { 00:12:32.761 "params": { 00:12:32.761 "trtype": "pcie", 00:12:32.761 "traddr": "0000:00:11.0", 00:12:32.761 "name": "Nvme1" 00:12:32.761 }, 00:12:32.761 "method": "bdev_nvme_attach_controller" 00:12:32.761 }, 00:12:32.761 { 00:12:32.761 "method": "bdev_wait_for_examine" 00:12:32.761 } 00:12:32.761 ] 00:12:32.761 } 00:12:32.761 ] 00:12:32.761 } 00:12:33.019 [2024-11-04 14:38:41.988782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.019 [2024-11-04 14:38:42.025202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.019 [2024-11-04 14:38:42.055956] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:33.277  [2024-11-04T14:38:42.675Z] Copying: 65/65 [MB] (average 1274 MBps) 00:12:33.535 00:12:33.535 14:38:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:12:33.535 14:38:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:12:33.535 14:38:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:12:33.535 14:38:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:12:33.535 [2024-11-04 14:38:42.462749] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:33.535 [2024-11-04 14:38:42.462808] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60101 ] 00:12:33.535 { 00:12:33.535 "subsystems": [ 00:12:33.535 { 00:12:33.535 "subsystem": "bdev", 00:12:33.535 "config": [ 00:12:33.535 { 00:12:33.535 "params": { 00:12:33.535 "trtype": "pcie", 00:12:33.535 "traddr": "0000:00:10.0", 00:12:33.535 "name": "Nvme0" 00:12:33.535 }, 00:12:33.535 "method": "bdev_nvme_attach_controller" 00:12:33.535 }, 00:12:33.535 { 00:12:33.535 "params": { 00:12:33.535 "trtype": "pcie", 00:12:33.535 "traddr": "0000:00:11.0", 00:12:33.535 "name": "Nvme1" 00:12:33.535 }, 00:12:33.535 "method": "bdev_nvme_attach_controller" 00:12:33.535 }, 00:12:33.535 { 00:12:33.535 "method": "bdev_wait_for_examine" 00:12:33.535 } 00:12:33.535 ] 00:12:33.535 } 00:12:33.535 ] 00:12:33.535 } 00:12:33.535 [2024-11-04 14:38:42.602285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.535 [2024-11-04 14:38:42.637058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.535 [2024-11-04 14:38:42.667955] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:33.793  [2024-11-04T14:38:42.933Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:12:33.793 00:12:34.052 ************************************ 00:12:34.052 END TEST dd_offset_magic 00:12:34.052 ************************************ 00:12:34.052 14:38:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:12:34.052 14:38:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:12:34.052 00:12:34.052 real 0m2.321s 00:12:34.052 user 0m1.648s 00:12:34.052 sys 0m0.566s 00:12:34.052 14:38:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:34.052 14:38:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:12:34.052 14:38:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:12:34.052 14:38:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:12:34.052 14:38:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:12:34.052 14:38:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:12:34.052 14:38:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:12:34.052 14:38:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:12:34.052 14:38:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:12:34.052 14:38:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:12:34.052 14:38:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:12:34.052 14:38:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:12:34.052 14:38:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:12:34.052 [2024-11-04 14:38:42.999222] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:34.052 [2024-11-04 14:38:42.999284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60132 ] 00:12:34.052 { 00:12:34.052 "subsystems": [ 00:12:34.052 { 00:12:34.052 "subsystem": "bdev", 00:12:34.052 "config": [ 00:12:34.052 { 00:12:34.052 "params": { 00:12:34.052 "trtype": "pcie", 00:12:34.052 "traddr": "0000:00:10.0", 00:12:34.052 "name": "Nvme0" 00:12:34.052 }, 00:12:34.052 "method": "bdev_nvme_attach_controller" 00:12:34.052 }, 00:12:34.052 { 00:12:34.052 "params": { 00:12:34.052 "trtype": "pcie", 00:12:34.052 "traddr": "0000:00:11.0", 00:12:34.052 "name": "Nvme1" 00:12:34.052 }, 00:12:34.052 "method": "bdev_nvme_attach_controller" 00:12:34.052 }, 00:12:34.052 { 00:12:34.052 "method": "bdev_wait_for_examine" 00:12:34.052 } 00:12:34.052 ] 00:12:34.052 } 00:12:34.052 ] 00:12:34.052 } 00:12:34.052 [2024-11-04 14:38:43.141353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.052 [2024-11-04 14:38:43.177114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.310 [2024-11-04 14:38:43.207269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:34.310  [2024-11-04T14:38:43.708Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:12:34.568 00:12:34.568 14:38:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:12:34.568 14:38:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:12:34.568 14:38:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:12:34.568 14:38:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:12:34.568 14:38:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:12:34.568 14:38:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:12:34.568 14:38:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:12:34.568 14:38:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:12:34.568 14:38:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:12:34.568 14:38:43 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:12:34.568 [2024-11-04 14:38:43.508702] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:34.568 [2024-11-04 14:38:43.508768] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60148 ] 00:12:34.568 { 00:12:34.568 "subsystems": [ 00:12:34.568 { 00:12:34.568 "subsystem": "bdev", 00:12:34.568 "config": [ 00:12:34.568 { 00:12:34.568 "params": { 00:12:34.568 "trtype": "pcie", 00:12:34.568 "traddr": "0000:00:10.0", 00:12:34.568 "name": "Nvme0" 00:12:34.568 }, 00:12:34.568 "method": "bdev_nvme_attach_controller" 00:12:34.568 }, 00:12:34.568 { 00:12:34.568 "params": { 00:12:34.568 "trtype": "pcie", 00:12:34.568 "traddr": "0000:00:11.0", 00:12:34.568 "name": "Nvme1" 00:12:34.568 }, 00:12:34.568 "method": "bdev_nvme_attach_controller" 00:12:34.568 }, 00:12:34.568 { 00:12:34.568 "method": "bdev_wait_for_examine" 00:12:34.568 } 00:12:34.568 ] 00:12:34.568 } 00:12:34.568 ] 00:12:34.568 } 00:12:34.568 [2024-11-04 14:38:43.639320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.568 [2024-11-04 14:38:43.679253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.826 [2024-11-04 14:38:43.711006] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:34.826  [2024-11-04T14:38:44.225Z] Copying: 5120/5120 [kB] (average 714 MBps) 00:12:35.085 00:12:35.085 14:38:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:12:35.085 00:12:35.085 real 0m5.296s 00:12:35.085 user 0m3.752s 00:12:35.085 sys 0m2.250s 00:12:35.085 14:38:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:35.085 14:38:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:12:35.085 ************************************ 00:12:35.085 END TEST spdk_dd_bdev_to_bdev 00:12:35.085 ************************************ 00:12:35.085 14:38:44 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:12:35.085 14:38:44 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:12:35.085 14:38:44 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:35.085 14:38:44 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:35.085 14:38:44 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:12:35.085 ************************************ 00:12:35.085 START TEST spdk_dd_uring 00:12:35.085 ************************************ 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:12:35.085 * Looking for test storage... 
00:12:35.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # lcov --version 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:35.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.085 --rc genhtml_branch_coverage=1 00:12:35.085 --rc genhtml_function_coverage=1 00:12:35.085 --rc genhtml_legend=1 00:12:35.085 --rc geninfo_all_blocks=1 00:12:35.085 --rc geninfo_unexecuted_blocks=1 00:12:35.085 00:12:35.085 ' 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:35.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.085 --rc genhtml_branch_coverage=1 00:12:35.085 --rc genhtml_function_coverage=1 00:12:35.085 --rc genhtml_legend=1 00:12:35.085 --rc geninfo_all_blocks=1 00:12:35.085 --rc geninfo_unexecuted_blocks=1 00:12:35.085 00:12:35.085 ' 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:35.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.085 --rc genhtml_branch_coverage=1 00:12:35.085 --rc genhtml_function_coverage=1 00:12:35.085 --rc genhtml_legend=1 00:12:35.085 --rc geninfo_all_blocks=1 00:12:35.085 --rc geninfo_unexecuted_blocks=1 00:12:35.085 00:12:35.085 ' 00:12:35.085 14:38:44 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:35.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.086 --rc genhtml_branch_coverage=1 00:12:35.086 --rc genhtml_function_coverage=1 00:12:35.086 --rc genhtml_legend=1 00:12:35.086 --rc geninfo_all_blocks=1 00:12:35.086 --rc geninfo_unexecuted_blocks=1 00:12:35.086 00:12:35.086 ' 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:12:35.086 ************************************ 00:12:35.086 START TEST dd_uring_copy 00:12:35.086 ************************************ 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1127 -- # uring_zram_copy 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:12:35.086 
14:38:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:12:35.086 14:38:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:12:35.344 14:38:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:12:35.344 14:38:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:12:35.344 14:38:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:12:35.344 14:38:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=252xneqea3j4su1msqc5jy3drgw65vcrkptci9h0ezo2m7uzqb7sgbksc0i98l7mw1nj0v0y9ebrwbd3oab35vsgv7ow8egswptkks3nq18dwkuqym4r3279u4i9kwettiuxrh3dgp4duccy846l9yfq53k1zo381tw811n0gu10iryvqrdjqzdeldgxwuff0o9u93falsoojg1yutf8zbx237aexrunq2fpud73e0vfhep7mfcb5supjf18nh75kuktxfeae5x866ppnbw9vw5hqs7ysktgiwu5qwkdgntxq3711mo8irc5fag7e0c0n5d0khqrtcfprqiasrkn0vw5vq31bbd5o3ca7y2p8gabp9q1h9cjbpyqg49fplms7bbjbl0ko0chh6a9016mnflskvjn0v3ihrcy7esedthrht3j69wu4xh2qkjvqru2y63qtowwp40pln1xv4ypztqpy5jucdnvzdh6uxpp78wnvxq2m282ihz3w2odtbaq0w6nd98cm6v66e2jlgm84yxlbjysa3poiioz4cin2zur8zx7yt3f68bvt0lek8go2mcfhy4s92nbe64tq7tmty564zb9ve3z5cnx5pyyfc3f7ka6sm14qici2ak2v6kjz7ye7a6z5bzg12uvi1d4w3d9ajw9m87neyoykqhu8twu4icgihwgz68r4cnpi0kjoksifo7kn9us59jnp2kzzlxdpbp09hpkrzwjhq33lwul3lt8g42z9oyup4ul9vnsxznby7sx455pg5i4jo54cjun8ykc1kl2cm1o3g2fkrh9ksf0ek2ius7rew4rvrsb69h8intdhmsob259hu6oca2rj55n1x18lpt66cdkt5y3d5379m68vcw78bys5w2pkflrfaiqza9t1ny90rw8pmox66f9ol8u6b44zajlwh3375sv4lseegkj33az3kzhktw5p4luac89j4d7d6pdzlm8zgkl5q69i8uekmlq3to3phqzrc6teryqs0ilgncs 00:12:35.344 14:38:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
252xneqea3j4su1msqc5jy3drgw65vcrkptci9h0ezo2m7uzqb7sgbksc0i98l7mw1nj0v0y9ebrwbd3oab35vsgv7ow8egswptkks3nq18dwkuqym4r3279u4i9kwettiuxrh3dgp4duccy846l9yfq53k1zo381tw811n0gu10iryvqrdjqzdeldgxwuff0o9u93falsoojg1yutf8zbx237aexrunq2fpud73e0vfhep7mfcb5supjf18nh75kuktxfeae5x866ppnbw9vw5hqs7ysktgiwu5qwkdgntxq3711mo8irc5fag7e0c0n5d0khqrtcfprqiasrkn0vw5vq31bbd5o3ca7y2p8gabp9q1h9cjbpyqg49fplms7bbjbl0ko0chh6a9016mnflskvjn0v3ihrcy7esedthrht3j69wu4xh2qkjvqru2y63qtowwp40pln1xv4ypztqpy5jucdnvzdh6uxpp78wnvxq2m282ihz3w2odtbaq0w6nd98cm6v66e2jlgm84yxlbjysa3poiioz4cin2zur8zx7yt3f68bvt0lek8go2mcfhy4s92nbe64tq7tmty564zb9ve3z5cnx5pyyfc3f7ka6sm14qici2ak2v6kjz7ye7a6z5bzg12uvi1d4w3d9ajw9m87neyoykqhu8twu4icgihwgz68r4cnpi0kjoksifo7kn9us59jnp2kzzlxdpbp09hpkrzwjhq33lwul3lt8g42z9oyup4ul9vnsxznby7sx455pg5i4jo54cjun8ykc1kl2cm1o3g2fkrh9ksf0ek2ius7rew4rvrsb69h8intdhmsob259hu6oca2rj55n1x18lpt66cdkt5y3d5379m68vcw78bys5w2pkflrfaiqza9t1ny90rw8pmox66f9ol8u6b44zajlwh3375sv4lseegkj33az3kzhktw5p4luac89j4d7d6pdzlm8zgkl5q69i8uekmlq3to3phqzrc6teryqs0ilgncs 00:12:35.344 14:38:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:12:35.344 [2024-11-04 14:38:44.277764] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:12:35.345 [2024-11-04 14:38:44.277829] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60220 ] 00:12:35.345 [2024-11-04 14:38:44.417717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.345 [2024-11-04 14:38:44.454126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.602 [2024-11-04 14:38:44.484536] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:35.860  [2024-11-04T14:38:45.258Z] Copying: 511/511 [MB] (average 2039 MBps) 00:12:36.118 00:12:36.118 14:38:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:12:36.118 14:38:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:12:36.119 14:38:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:36.119 14:38:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:12:36.119 [2024-11-04 14:38:45.109517] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:36.119 [2024-11-04 14:38:45.109581] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60236 ] 00:12:36.119 { 00:12:36.119 "subsystems": [ 00:12:36.119 { 00:12:36.119 "subsystem": "bdev", 00:12:36.119 "config": [ 00:12:36.119 { 00:12:36.119 "params": { 00:12:36.119 "block_size": 512, 00:12:36.119 "num_blocks": 1048576, 00:12:36.119 "name": "malloc0" 00:12:36.119 }, 00:12:36.119 "method": "bdev_malloc_create" 00:12:36.119 }, 00:12:36.119 { 00:12:36.119 "params": { 00:12:36.119 "filename": "/dev/zram1", 00:12:36.119 "name": "uring0" 00:12:36.119 }, 00:12:36.119 "method": "bdev_uring_create" 00:12:36.119 }, 00:12:36.119 { 00:12:36.119 "method": "bdev_wait_for_examine" 00:12:36.119 } 00:12:36.119 ] 00:12:36.119 } 00:12:36.119 ] 00:12:36.119 } 00:12:36.119 [2024-11-04 14:38:45.249840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.376 [2024-11-04 14:38:45.286173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.376 [2024-11-04 14:38:45.318258] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:37.312  [2024-11-04T14:38:47.386Z] Copying: 268/512 [MB] (268 MBps) [2024-11-04T14:38:47.644Z] Copying: 512/512 [MB] (average 269 MBps) 00:12:38.504 00:12:38.504 14:38:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:12:38.504 14:38:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:12:38.504 14:38:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:38.504 14:38:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:12:38.504 [2024-11-04 14:38:47.577732] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:38.504 [2024-11-04 14:38:47.577796] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60275 ] 00:12:38.504 { 00:12:38.504 "subsystems": [ 00:12:38.504 { 00:12:38.504 "subsystem": "bdev", 00:12:38.504 "config": [ 00:12:38.504 { 00:12:38.504 "params": { 00:12:38.504 "block_size": 512, 00:12:38.504 "num_blocks": 1048576, 00:12:38.504 "name": "malloc0" 00:12:38.504 }, 00:12:38.504 "method": "bdev_malloc_create" 00:12:38.504 }, 00:12:38.504 { 00:12:38.504 "params": { 00:12:38.504 "filename": "/dev/zram1", 00:12:38.504 "name": "uring0" 00:12:38.504 }, 00:12:38.504 "method": "bdev_uring_create" 00:12:38.504 }, 00:12:38.504 { 00:12:38.504 "method": "bdev_wait_for_examine" 00:12:38.504 } 00:12:38.504 ] 00:12:38.504 } 00:12:38.504 ] 00:12:38.504 } 00:12:38.762 [2024-11-04 14:38:47.717304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.762 [2024-11-04 14:38:47.752826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.762 [2024-11-04 14:38:47.783596] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:40.134  [2024-11-04T14:38:50.206Z] Copying: 200/512 [MB] (200 MBps) [2024-11-04T14:38:50.463Z] Copying: 395/512 [MB] (194 MBps) [2024-11-04T14:38:50.721Z] Copying: 512/512 [MB] (average 205 MBps) 00:12:41.581 00:12:41.581 14:38:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:12:41.581 14:38:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 252xneqea3j4su1msqc5jy3drgw65vcrkptci9h0ezo2m7uzqb7sgbksc0i98l7mw1nj0v0y9ebrwbd3oab35vsgv7ow8egswptkks3nq18dwkuqym4r3279u4i9kwettiuxrh3dgp4duccy846l9yfq53k1zo381tw811n0gu10iryvqrdjqzdeldgxwuff0o9u93falsoojg1yutf8zbx237aexrunq2fpud73e0vfhep7mfcb5supjf18nh75kuktxfeae5x866ppnbw9vw5hqs7ysktgiwu5qwkdgntxq3711mo8irc5fag7e0c0n5d0khqrtcfprqiasrkn0vw5vq31bbd5o3ca7y2p8gabp9q1h9cjbpyqg49fplms7bbjbl0ko0chh6a9016mnflskvjn0v3ihrcy7esedthrht3j69wu4xh2qkjvqru2y63qtowwp40pln1xv4ypztqpy5jucdnvzdh6uxpp78wnvxq2m282ihz3w2odtbaq0w6nd98cm6v66e2jlgm84yxlbjysa3poiioz4cin2zur8zx7yt3f68bvt0lek8go2mcfhy4s92nbe64tq7tmty564zb9ve3z5cnx5pyyfc3f7ka6sm14qici2ak2v6kjz7ye7a6z5bzg12uvi1d4w3d9ajw9m87neyoykqhu8twu4icgihwgz68r4cnpi0kjoksifo7kn9us59jnp2kzzlxdpbp09hpkrzwjhq33lwul3lt8g42z9oyup4ul9vnsxznby7sx455pg5i4jo54cjun8ykc1kl2cm1o3g2fkrh9ksf0ek2ius7rew4rvrsb69h8intdhmsob259hu6oca2rj55n1x18lpt66cdkt5y3d5379m68vcw78bys5w2pkflrfaiqza9t1ny90rw8pmox66f9ol8u6b44zajlwh3375sv4lseegkj33az3kzhktw5p4luac89j4d7d6pdzlm8zgkl5q69i8uekmlq3to3phqzrc6teryqs0ilgncs == 
\2\5\2\x\n\e\q\e\a\3\j\4\s\u\1\m\s\q\c\5\j\y\3\d\r\g\w\6\5\v\c\r\k\p\t\c\i\9\h\0\e\z\o\2\m\7\u\z\q\b\7\s\g\b\k\s\c\0\i\9\8\l\7\m\w\1\n\j\0\v\0\y\9\e\b\r\w\b\d\3\o\a\b\3\5\v\s\g\v\7\o\w\8\e\g\s\w\p\t\k\k\s\3\n\q\1\8\d\w\k\u\q\y\m\4\r\3\2\7\9\u\4\i\9\k\w\e\t\t\i\u\x\r\h\3\d\g\p\4\d\u\c\c\y\8\4\6\l\9\y\f\q\5\3\k\1\z\o\3\8\1\t\w\8\1\1\n\0\g\u\1\0\i\r\y\v\q\r\d\j\q\z\d\e\l\d\g\x\w\u\f\f\0\o\9\u\9\3\f\a\l\s\o\o\j\g\1\y\u\t\f\8\z\b\x\2\3\7\a\e\x\r\u\n\q\2\f\p\u\d\7\3\e\0\v\f\h\e\p\7\m\f\c\b\5\s\u\p\j\f\1\8\n\h\7\5\k\u\k\t\x\f\e\a\e\5\x\8\6\6\p\p\n\b\w\9\v\w\5\h\q\s\7\y\s\k\t\g\i\w\u\5\q\w\k\d\g\n\t\x\q\3\7\1\1\m\o\8\i\r\c\5\f\a\g\7\e\0\c\0\n\5\d\0\k\h\q\r\t\c\f\p\r\q\i\a\s\r\k\n\0\v\w\5\v\q\3\1\b\b\d\5\o\3\c\a\7\y\2\p\8\g\a\b\p\9\q\1\h\9\c\j\b\p\y\q\g\4\9\f\p\l\m\s\7\b\b\j\b\l\0\k\o\0\c\h\h\6\a\9\0\1\6\m\n\f\l\s\k\v\j\n\0\v\3\i\h\r\c\y\7\e\s\e\d\t\h\r\h\t\3\j\6\9\w\u\4\x\h\2\q\k\j\v\q\r\u\2\y\6\3\q\t\o\w\w\p\4\0\p\l\n\1\x\v\4\y\p\z\t\q\p\y\5\j\u\c\d\n\v\z\d\h\6\u\x\p\p\7\8\w\n\v\x\q\2\m\2\8\2\i\h\z\3\w\2\o\d\t\b\a\q\0\w\6\n\d\9\8\c\m\6\v\6\6\e\2\j\l\g\m\8\4\y\x\l\b\j\y\s\a\3\p\o\i\i\o\z\4\c\i\n\2\z\u\r\8\z\x\7\y\t\3\f\6\8\b\v\t\0\l\e\k\8\g\o\2\m\c\f\h\y\4\s\9\2\n\b\e\6\4\t\q\7\t\m\t\y\5\6\4\z\b\9\v\e\3\z\5\c\n\x\5\p\y\y\f\c\3\f\7\k\a\6\s\m\1\4\q\i\c\i\2\a\k\2\v\6\k\j\z\7\y\e\7\a\6\z\5\b\z\g\1\2\u\v\i\1\d\4\w\3\d\9\a\j\w\9\m\8\7\n\e\y\o\y\k\q\h\u\8\t\w\u\4\i\c\g\i\h\w\g\z\6\8\r\4\c\n\p\i\0\k\j\o\k\s\i\f\o\7\k\n\9\u\s\5\9\j\n\p\2\k\z\z\l\x\d\p\b\p\0\9\h\p\k\r\z\w\j\h\q\3\3\l\w\u\l\3\l\t\8\g\4\2\z\9\o\y\u\p\4\u\l\9\v\n\s\x\z\n\b\y\7\s\x\4\5\5\p\g\5\i\4\j\o\5\4\c\j\u\n\8\y\k\c\1\k\l\2\c\m\1\o\3\g\2\f\k\r\h\9\k\s\f\0\e\k\2\i\u\s\7\r\e\w\4\r\v\r\s\b\6\9\h\8\i\n\t\d\h\m\s\o\b\2\5\9\h\u\6\o\c\a\2\r\j\5\5\n\1\x\1\8\l\p\t\6\6\c\d\k\t\5\y\3\d\5\3\7\9\m\6\8\v\c\w\7\8\b\y\s\5\w\2\p\k\f\l\r\f\a\i\q\z\a\9\t\1\n\y\9\0\r\w\8\p\m\o\x\6\6\f\9\o\l\8\u\6\b\4\4\z\a\j\l\w\h\3\3\7\5\s\v\4\l\s\e\e\g\k\j\3\3\a\z\3\k\z\h\k\t\w\5\p\4\l\u\a\c\8\9\j\4\d\7\d\6\p\d\z\l\m\8\z\g\k\l\5\q\6\9\i\8\u\e\k\m\l\q\3\t\o\3\p\h\q\z\r\c\6\t\e\r\y\q\s\0\i\l\g\n\c\s ]] 00:12:41.581 14:38:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:12:41.581 14:38:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 252xneqea3j4su1msqc5jy3drgw65vcrkptci9h0ezo2m7uzqb7sgbksc0i98l7mw1nj0v0y9ebrwbd3oab35vsgv7ow8egswptkks3nq18dwkuqym4r3279u4i9kwettiuxrh3dgp4duccy846l9yfq53k1zo381tw811n0gu10iryvqrdjqzdeldgxwuff0o9u93falsoojg1yutf8zbx237aexrunq2fpud73e0vfhep7mfcb5supjf18nh75kuktxfeae5x866ppnbw9vw5hqs7ysktgiwu5qwkdgntxq3711mo8irc5fag7e0c0n5d0khqrtcfprqiasrkn0vw5vq31bbd5o3ca7y2p8gabp9q1h9cjbpyqg49fplms7bbjbl0ko0chh6a9016mnflskvjn0v3ihrcy7esedthrht3j69wu4xh2qkjvqru2y63qtowwp40pln1xv4ypztqpy5jucdnvzdh6uxpp78wnvxq2m282ihz3w2odtbaq0w6nd98cm6v66e2jlgm84yxlbjysa3poiioz4cin2zur8zx7yt3f68bvt0lek8go2mcfhy4s92nbe64tq7tmty564zb9ve3z5cnx5pyyfc3f7ka6sm14qici2ak2v6kjz7ye7a6z5bzg12uvi1d4w3d9ajw9m87neyoykqhu8twu4icgihwgz68r4cnpi0kjoksifo7kn9us59jnp2kzzlxdpbp09hpkrzwjhq33lwul3lt8g42z9oyup4ul9vnsxznby7sx455pg5i4jo54cjun8ykc1kl2cm1o3g2fkrh9ksf0ek2ius7rew4rvrsb69h8intdhmsob259hu6oca2rj55n1x18lpt66cdkt5y3d5379m68vcw78bys5w2pkflrfaiqza9t1ny90rw8pmox66f9ol8u6b44zajlwh3375sv4lseegkj33az3kzhktw5p4luac89j4d7d6pdzlm8zgkl5q69i8uekmlq3to3phqzrc6teryqs0ilgncs == 
\2\5\2\x\n\e\q\e\a\3\j\4\s\u\1\m\s\q\c\5\j\y\3\d\r\g\w\6\5\v\c\r\k\p\t\c\i\9\h\0\e\z\o\2\m\7\u\z\q\b\7\s\g\b\k\s\c\0\i\9\8\l\7\m\w\1\n\j\0\v\0\y\9\e\b\r\w\b\d\3\o\a\b\3\5\v\s\g\v\7\o\w\8\e\g\s\w\p\t\k\k\s\3\n\q\1\8\d\w\k\u\q\y\m\4\r\3\2\7\9\u\4\i\9\k\w\e\t\t\i\u\x\r\h\3\d\g\p\4\d\u\c\c\y\8\4\6\l\9\y\f\q\5\3\k\1\z\o\3\8\1\t\w\8\1\1\n\0\g\u\1\0\i\r\y\v\q\r\d\j\q\z\d\e\l\d\g\x\w\u\f\f\0\o\9\u\9\3\f\a\l\s\o\o\j\g\1\y\u\t\f\8\z\b\x\2\3\7\a\e\x\r\u\n\q\2\f\p\u\d\7\3\e\0\v\f\h\e\p\7\m\f\c\b\5\s\u\p\j\f\1\8\n\h\7\5\k\u\k\t\x\f\e\a\e\5\x\8\6\6\p\p\n\b\w\9\v\w\5\h\q\s\7\y\s\k\t\g\i\w\u\5\q\w\k\d\g\n\t\x\q\3\7\1\1\m\o\8\i\r\c\5\f\a\g\7\e\0\c\0\n\5\d\0\k\h\q\r\t\c\f\p\r\q\i\a\s\r\k\n\0\v\w\5\v\q\3\1\b\b\d\5\o\3\c\a\7\y\2\p\8\g\a\b\p\9\q\1\h\9\c\j\b\p\y\q\g\4\9\f\p\l\m\s\7\b\b\j\b\l\0\k\o\0\c\h\h\6\a\9\0\1\6\m\n\f\l\s\k\v\j\n\0\v\3\i\h\r\c\y\7\e\s\e\d\t\h\r\h\t\3\j\6\9\w\u\4\x\h\2\q\k\j\v\q\r\u\2\y\6\3\q\t\o\w\w\p\4\0\p\l\n\1\x\v\4\y\p\z\t\q\p\y\5\j\u\c\d\n\v\z\d\h\6\u\x\p\p\7\8\w\n\v\x\q\2\m\2\8\2\i\h\z\3\w\2\o\d\t\b\a\q\0\w\6\n\d\9\8\c\m\6\v\6\6\e\2\j\l\g\m\8\4\y\x\l\b\j\y\s\a\3\p\o\i\i\o\z\4\c\i\n\2\z\u\r\8\z\x\7\y\t\3\f\6\8\b\v\t\0\l\e\k\8\g\o\2\m\c\f\h\y\4\s\9\2\n\b\e\6\4\t\q\7\t\m\t\y\5\6\4\z\b\9\v\e\3\z\5\c\n\x\5\p\y\y\f\c\3\f\7\k\a\6\s\m\1\4\q\i\c\i\2\a\k\2\v\6\k\j\z\7\y\e\7\a\6\z\5\b\z\g\1\2\u\v\i\1\d\4\w\3\d\9\a\j\w\9\m\8\7\n\e\y\o\y\k\q\h\u\8\t\w\u\4\i\c\g\i\h\w\g\z\6\8\r\4\c\n\p\i\0\k\j\o\k\s\i\f\o\7\k\n\9\u\s\5\9\j\n\p\2\k\z\z\l\x\d\p\b\p\0\9\h\p\k\r\z\w\j\h\q\3\3\l\w\u\l\3\l\t\8\g\4\2\z\9\o\y\u\p\4\u\l\9\v\n\s\x\z\n\b\y\7\s\x\4\5\5\p\g\5\i\4\j\o\5\4\c\j\u\n\8\y\k\c\1\k\l\2\c\m\1\o\3\g\2\f\k\r\h\9\k\s\f\0\e\k\2\i\u\s\7\r\e\w\4\r\v\r\s\b\6\9\h\8\i\n\t\d\h\m\s\o\b\2\5\9\h\u\6\o\c\a\2\r\j\5\5\n\1\x\1\8\l\p\t\6\6\c\d\k\t\5\y\3\d\5\3\7\9\m\6\8\v\c\w\7\8\b\y\s\5\w\2\p\k\f\l\r\f\a\i\q\z\a\9\t\1\n\y\9\0\r\w\8\p\m\o\x\6\6\f\9\o\l\8\u\6\b\4\4\z\a\j\l\w\h\3\3\7\5\s\v\4\l\s\e\e\g\k\j\3\3\a\z\3\k\z\h\k\t\w\5\p\4\l\u\a\c\8\9\j\4\d\7\d\6\p\d\z\l\m\8\z\g\k\l\5\q\6\9\i\8\u\e\k\m\l\q\3\t\o\3\p\h\q\z\r\c\6\t\e\r\y\q\s\0\i\l\g\n\c\s ]] 00:12:41.581 14:38:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:12:41.839 14:38:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:12:41.839 14:38:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:12:41.839 14:38:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:41.839 14:38:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:12:41.839 [2024-11-04 14:38:50.826055] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:41.839 [2024-11-04 14:38:50.826121] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60336 ] 00:12:41.839 { 00:12:41.839 "subsystems": [ 00:12:41.839 { 00:12:41.839 "subsystem": "bdev", 00:12:41.839 "config": [ 00:12:41.839 { 00:12:41.839 "params": { 00:12:41.839 "block_size": 512, 00:12:41.839 "num_blocks": 1048576, 00:12:41.839 "name": "malloc0" 00:12:41.839 }, 00:12:41.839 "method": "bdev_malloc_create" 00:12:41.839 }, 00:12:41.839 { 00:12:41.839 "params": { 00:12:41.839 "filename": "/dev/zram1", 00:12:41.839 "name": "uring0" 00:12:41.839 }, 00:12:41.839 "method": "bdev_uring_create" 00:12:41.839 }, 00:12:41.839 { 00:12:41.839 "method": "bdev_wait_for_examine" 00:12:41.839 } 00:12:41.839 ] 00:12:41.839 } 00:12:41.839 ] 00:12:41.839 } 00:12:41.839 [2024-11-04 14:38:50.965591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.116 [2024-11-04 14:38:51.000539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.116 [2024-11-04 14:38:51.031200] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:43.049  [2024-11-04T14:38:53.563Z] Copying: 185/512 [MB] (185 MBps) [2024-11-04T14:38:54.128Z] Copying: 371/512 [MB] (186 MBps) [2024-11-04T14:38:54.128Z] Copying: 512/512 [MB] (average 189 MBps) 00:12:44.988 00:12:44.988 14:38:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:12:44.988 14:38:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:12:44.988 14:38:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:12:44.988 14:38:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:12:44.988 14:38:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:12:44.988 14:38:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:12:44.988 14:38:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:44.988 14:38:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:12:44.988 [2024-11-04 14:38:54.078536] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:44.988 [2024-11-04 14:38:54.078599] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60381 ] 00:12:44.988 { 00:12:44.988 "subsystems": [ 00:12:44.988 { 00:12:44.988 "subsystem": "bdev", 00:12:44.988 "config": [ 00:12:44.988 { 00:12:44.988 "params": { 00:12:44.988 "block_size": 512, 00:12:44.988 "num_blocks": 1048576, 00:12:44.988 "name": "malloc0" 00:12:44.988 }, 00:12:44.988 "method": "bdev_malloc_create" 00:12:44.988 }, 00:12:44.988 { 00:12:44.988 "params": { 00:12:44.988 "filename": "/dev/zram1", 00:12:44.988 "name": "uring0" 00:12:44.988 }, 00:12:44.988 "method": "bdev_uring_create" 00:12:44.988 }, 00:12:44.988 { 00:12:44.988 "params": { 00:12:44.988 "name": "uring0" 00:12:44.988 }, 00:12:44.988 "method": "bdev_uring_delete" 00:12:44.988 }, 00:12:44.988 { 00:12:44.988 "method": "bdev_wait_for_examine" 00:12:44.988 } 00:12:44.988 ] 00:12:44.988 } 00:12:44.988 ] 00:12:44.988 } 00:12:45.246 [2024-11-04 14:38:54.213519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.246 [2024-11-04 14:38:54.245418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.246 [2024-11-04 14:38:54.274155] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:45.504  [2024-11-04T14:38:54.644Z] Copying: 0/0 [B] (average 0 Bps) 00:12:45.504 00:12:45.504 14:38:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:12:45.504 14:38:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:12:45.504 14:38:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:12:45.504 14:38:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:12:45.504 14:38:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:12:45.504 14:38:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:45.504 14:38:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:45.504 14:38:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:12:45.504 14:38:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:45.504 14:38:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:45.504 14:38:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:45.504 14:38:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:45.504 14:38:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:45.504 14:38:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:45.504 14:38:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:45.504 14:38:54 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:12:45.504 [2024-11-04 14:38:54.620709] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:12:45.504 [2024-11-04 14:38:54.620772] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60410 ] 00:12:45.504 { 00:12:45.504 "subsystems": [ 00:12:45.504 { 00:12:45.504 "subsystem": "bdev", 00:12:45.504 "config": [ 00:12:45.504 { 00:12:45.504 "params": { 00:12:45.504 "block_size": 512, 00:12:45.504 "num_blocks": 1048576, 00:12:45.504 "name": "malloc0" 00:12:45.504 }, 00:12:45.504 "method": "bdev_malloc_create" 00:12:45.504 }, 00:12:45.504 { 00:12:45.504 "params": { 00:12:45.504 "filename": "/dev/zram1", 00:12:45.504 "name": "uring0" 00:12:45.504 }, 00:12:45.504 "method": "bdev_uring_create" 00:12:45.504 }, 00:12:45.504 { 00:12:45.504 "params": { 00:12:45.504 "name": "uring0" 00:12:45.504 }, 00:12:45.504 "method": "bdev_uring_delete" 00:12:45.504 }, 00:12:45.504 { 00:12:45.504 "method": "bdev_wait_for_examine" 00:12:45.504 } 00:12:45.504 ] 00:12:45.504 } 00:12:45.504 ] 00:12:45.504 } 00:12:45.762 [2024-11-04 14:38:54.756313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.762 [2024-11-04 14:38:54.786899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.762 [2024-11-04 14:38:54.815342] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:46.019 [2024-11-04 14:38:54.939205] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:12:46.019 [2024-11-04 14:38:54.939242] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:12:46.019 [2024-11-04 14:38:54.939247] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:12:46.019 [2024-11-04 14:38:54.939252] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:46.019 [2024-11-04 14:38:55.078106] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:46.019 14:38:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:12:46.019 14:38:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:46.019 14:38:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:12:46.019 14:38:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:12:46.019 14:38:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:12:46.019 14:38:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:46.019 14:38:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:12:46.019 14:38:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:12:46.019 14:38:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:12:46.019 14:38:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:12:46.019 14:38:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:12:46.019 14:38:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:12:46.019 00:12:46.019 real 0m10.924s 00:12:46.019 user 0m7.729s 00:12:46.019 sys 0m8.942s 00:12:46.019 ************************************ 00:12:46.019 END TEST dd_uring_copy 00:12:46.019 ************************************ 00:12:46.019 14:38:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:46.019 14:38:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:12:46.277 00:12:46.277 real 0m11.110s 00:12:46.277 user 0m7.826s 00:12:46.277 sys 0m9.037s 00:12:46.277 14:38:55 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:46.277 ************************************ 00:12:46.277 END TEST spdk_dd_uring 00:12:46.277 ************************************ 00:12:46.277 14:38:55 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:12:46.277 14:38:55 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:12:46.277 14:38:55 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:46.277 14:38:55 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:46.277 14:38:55 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:12:46.277 ************************************ 00:12:46.277 START TEST spdk_dd_sparse 00:12:46.277 ************************************ 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:12:46.277 * Looking for test storage... 00:12:46.277 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # lcov --version 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:46.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.277 --rc genhtml_branch_coverage=1 00:12:46.277 --rc genhtml_function_coverage=1 00:12:46.277 --rc genhtml_legend=1 00:12:46.277 --rc geninfo_all_blocks=1 00:12:46.277 --rc geninfo_unexecuted_blocks=1 00:12:46.277 00:12:46.277 ' 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:46.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.277 --rc genhtml_branch_coverage=1 00:12:46.277 --rc genhtml_function_coverage=1 00:12:46.277 --rc genhtml_legend=1 00:12:46.277 --rc geninfo_all_blocks=1 00:12:46.277 --rc geninfo_unexecuted_blocks=1 00:12:46.277 00:12:46.277 ' 00:12:46.277 14:38:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:46.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.278 --rc genhtml_branch_coverage=1 00:12:46.278 --rc genhtml_function_coverage=1 00:12:46.278 --rc genhtml_legend=1 00:12:46.278 --rc geninfo_all_blocks=1 00:12:46.278 --rc geninfo_unexecuted_blocks=1 00:12:46.278 00:12:46.278 ' 00:12:46.278 14:38:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:46.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.278 --rc genhtml_branch_coverage=1 00:12:46.278 --rc genhtml_function_coverage=1 00:12:46.278 --rc genhtml_legend=1 00:12:46.278 --rc geninfo_all_blocks=1 00:12:46.278 --rc geninfo_unexecuted_blocks=1 00:12:46.278 00:12:46.278 ' 00:12:46.278 14:38:55 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:46.278 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:12:46.278 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.278 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.278 14:38:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.278 14:38:55 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.278 14:38:55 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.278 14:38:55 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.278 14:38:55 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:12:46.278 14:38:55 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.278 14:38:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:12:46.278 14:38:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:12:46.278 14:38:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:12:46.278 14:38:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:12:46.278 14:38:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:12:46.278 14:38:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:12:46.278 14:38:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:12:46.278 14:38:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:12:46.278 14:38:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:12:46.278 14:38:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:12:46.278 14:38:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:12:46.278 1+0 records in 00:12:46.278 1+0 records out 00:12:46.278 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00507796 s, 826 MB/s 00:12:46.278 14:38:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:12:46.278 1+0 records in 00:12:46.278 1+0 records out 00:12:46.278 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00690407 s, 608 MB/s 00:12:46.278 14:38:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:12:46.536 1+0 records in 00:12:46.536 1+0 records out 00:12:46.536 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00617562 s, 679 MB/s 00:12:46.536 14:38:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:12:46.536 14:38:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:46.536 14:38:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:46.536 14:38:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:12:46.536 ************************************ 00:12:46.536 START TEST dd_sparse_file_to_file 00:12:46.536 ************************************ 00:12:46.536 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1127 -- # file_to_file 00:12:46.536 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:12:46.536 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:12:46.536 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:12:46.536 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:12:46.536 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:12:46.536 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:12:46.536 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:12:46.536 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:12:46.536 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:12:46.536 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:12:46.536 [2024-11-04 14:38:55.463526] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
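For reference, the prepare step traced above builds the sparse input used by all three sparse tests: truncate reserves a 100 MiB backing file for the AIO bdev, and three 4 MiB writes at seek offsets 0, 4 and 8 (in 4 MiB units, i.e. byte offsets 0, 16 MiB and 32 MiB) leave holes in file_zero1. A minimal sketch reconstructed from the trace; the file names and sizes are the ones visible in the log:

# 100 MiB backing file for the AIO bdev that spdk_dd will open
truncate dd_sparse_aio_disk --size 104857600
# three 4 MiB data extents in file_zero1, separated by holes
dd if=/dev/zero of=file_zero1 bs=4M count=1          # bytes 0 to 4 MiB
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4   # bytes 16 to 20 MiB
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8   # bytes 32 to 36 MiB
# result: 37748736 bytes (36 MiB) apparent size, 24576 512-byte blocks (12 MiB) allocated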
00:12:46.536 [2024-11-04 14:38:55.463591] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60508 ] 00:12:46.536 { 00:12:46.536 "subsystems": [ 00:12:46.536 { 00:12:46.536 "subsystem": "bdev", 00:12:46.536 "config": [ 00:12:46.536 { 00:12:46.536 "params": { 00:12:46.536 "block_size": 4096, 00:12:46.536 "filename": "dd_sparse_aio_disk", 00:12:46.536 "name": "dd_aio" 00:12:46.536 }, 00:12:46.536 "method": "bdev_aio_create" 00:12:46.536 }, 00:12:46.536 { 00:12:46.536 "params": { 00:12:46.536 "lvs_name": "dd_lvstore", 00:12:46.536 "bdev_name": "dd_aio" 00:12:46.536 }, 00:12:46.536 "method": "bdev_lvol_create_lvstore" 00:12:46.536 }, 00:12:46.536 { 00:12:46.536 "method": "bdev_wait_for_examine" 00:12:46.536 } 00:12:46.536 ] 00:12:46.536 } 00:12:46.536 ] 00:12:46.536 } 00:12:46.536 [2024-11-04 14:38:55.602701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.536 [2024-11-04 14:38:55.638884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.536 [2024-11-04 14:38:55.670278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:46.794  [2024-11-04T14:38:55.934Z] Copying: 12/36 [MB] (average 1333 MBps) 00:12:46.794 00:12:46.794 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:12:46.794 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:12:46.794 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:12:46.794 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:12:46.794 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:12:46.794 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:12:46.794 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:12:46.794 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:12:46.794 ************************************ 00:12:46.794 END TEST dd_sparse_file_to_file 00:12:46.794 ************************************ 00:12:46.794 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:12:46.794 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:12:46.794 00:12:46.794 real 0m0.466s 00:12:46.794 user 0m0.274s 00:12:46.794 sys 0m0.212s 00:12:46.794 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:46.794 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:12:46.794 14:38:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:12:46.794 14:38:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:46.795 14:38:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:46.795 14:38:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:12:46.795 ************************************ 00:12:46.795 START TEST dd_sparse_file_to_bdev 
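The dd_sparse_file_to_file case above copies file_zero1 to file_zero2 through spdk_dd with --sparse and a 12 MiB block size; the JSON config shown in the trace (passed on /dev/fd/62) creates the dd_aio AIO bdev and the dd_lvstore logical-volume store, and stat %s (apparent size) and %b (allocated blocks) of the two files are compared afterwards to confirm the holes survived the copy. A self-contained sketch of that flow, with a temporary config file standing in for the test's gen_conf helper:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
CONF=dd_bdev.json                      # temp config file, name assumed for this sketch

cat > "$CONF" <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "block_size": 4096, "filename": "dd_sparse_aio_disk", "name": "dd_aio" },
          "method": "bdev_aio_create"
        },
        {
          "params": { "lvs_name": "dd_lvstore", "bdev_name": "dd_aio" },
          "method": "bdev_lvol_create_lvstore"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON

# sparse file-to-file copy, 12 MiB I/O units, hole skipping enabled
"$SPDK_DD" --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json "$CONF"

# %s = apparent size, %b = allocated 512-byte blocks; the holes were preserved
# when both match the input (37748736 bytes / 24576 blocks in this run)
[[ $(stat --printf=%s file_zero1) -eq $(stat --printf=%s file_zero2) ]] || echo "size mismatch" >&2
[[ $(stat --printf=%b file_zero1) -eq $(stat --printf=%b file_zero2) ]] || echo "allocation mismatch" >&2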
00:12:46.795 ************************************ 00:12:46.795 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1127 -- # file_to_bdev 00:12:46.795 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:12:46.795 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:12:46.795 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:12:46.795 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:12:46.795 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:12:46.795 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:12:46.795 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:12:46.795 14:38:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:12:47.069 [2024-11-04 14:38:55.958723] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:12:47.069 [2024-11-04 14:38:55.958778] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60551 ] 00:12:47.069 { 00:12:47.069 "subsystems": [ 00:12:47.069 { 00:12:47.069 "subsystem": "bdev", 00:12:47.069 "config": [ 00:12:47.069 { 00:12:47.069 "params": { 00:12:47.069 "block_size": 4096, 00:12:47.069 "filename": "dd_sparse_aio_disk", 00:12:47.069 "name": "dd_aio" 00:12:47.069 }, 00:12:47.069 "method": "bdev_aio_create" 00:12:47.069 }, 00:12:47.069 { 00:12:47.069 "params": { 00:12:47.069 "lvs_name": "dd_lvstore", 00:12:47.069 "lvol_name": "dd_lvol", 00:12:47.069 "size_in_mib": 36, 00:12:47.069 "thin_provision": true 00:12:47.069 }, 00:12:47.069 "method": "bdev_lvol_create" 00:12:47.069 }, 00:12:47.069 { 00:12:47.069 "method": "bdev_wait_for_examine" 00:12:47.069 } 00:12:47.069 ] 00:12:47.069 } 00:12:47.069 ] 00:12:47.069 } 00:12:47.069 [2024-11-04 14:38:56.095759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.069 [2024-11-04 14:38:56.130852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.069 [2024-11-04 14:38:56.161920] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:47.327  [2024-11-04T14:38:56.467Z] Copying: 12/36 [MB] (average 600 MBps) 00:12:47.327 00:12:47.327 00:12:47.327 real 0m0.433s 00:12:47.327 user 0m0.261s 00:12:47.327 sys 0m0.200s 00:12:47.327 14:38:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:47.328 14:38:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:12:47.328 ************************************ 00:12:47.328 END TEST dd_sparse_file_to_bdev 00:12:47.328 ************************************ 00:12:47.328 14:38:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:12:47.328 14:38:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:47.328 14:38:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:47.328 14:38:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:12:47.328 ************************************ 00:12:47.328 START TEST dd_sparse_bdev_to_file 00:12:47.328 ************************************ 00:12:47.328 14:38:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1127 -- # bdev_to_file 00:12:47.328 14:38:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:12:47.328 14:38:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:12:47.328 14:38:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:12:47.328 14:38:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:12:47.328 14:38:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:12:47.328 14:38:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:12:47.328 14:38:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:12:47.328 14:38:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:12:47.328 [2024-11-04 14:38:56.430006] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:12:47.328 [2024-11-04 14:38:56.430072] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60583 ] 00:12:47.328 { 00:12:47.328 "subsystems": [ 00:12:47.328 { 00:12:47.328 "subsystem": "bdev", 00:12:47.328 "config": [ 00:12:47.328 { 00:12:47.328 "params": { 00:12:47.328 "block_size": 4096, 00:12:47.328 "filename": "dd_sparse_aio_disk", 00:12:47.328 "name": "dd_aio" 00:12:47.328 }, 00:12:47.328 "method": "bdev_aio_create" 00:12:47.328 }, 00:12:47.328 { 00:12:47.328 "method": "bdev_wait_for_examine" 00:12:47.328 } 00:12:47.328 ] 00:12:47.328 } 00:12:47.328 ] 00:12:47.328 } 00:12:47.585 [2024-11-04 14:38:56.566381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.585 [2024-11-04 14:38:56.602231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.585 [2024-11-04 14:38:56.633544] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:47.585  [2024-11-04T14:38:56.983Z] Copying: 12/36 [MB] (average 1000 MBps) 00:12:47.843 00:12:47.843 14:38:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:12:47.843 14:38:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:12:47.843 14:38:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:12:47.843 14:38:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:12:47.843 14:38:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:12:47.843 14:38:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:12:47.843 14:38:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:12:47.843 14:38:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:12:47.843 14:38:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:12:47.843 14:38:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:12:47.843 00:12:47.843 real 0m0.438s 00:12:47.843 user 0m0.254s 00:12:47.843 sys 0m0.206s 00:12:47.843 14:38:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:47.843 ************************************ 00:12:47.843 END TEST dd_sparse_bdev_to_file 00:12:47.843 ************************************ 00:12:47.843 14:38:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:12:47.843 14:38:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:12:47.843 14:38:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:12:47.843 14:38:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:12:47.843 14:38:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:12:47.843 14:38:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:12:47.843 00:12:47.843 real 0m1.687s 00:12:47.843 user 0m0.927s 00:12:47.843 sys 0m0.780s 00:12:47.843 14:38:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:47.843 14:38:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:12:47.843 ************************************ 00:12:47.843 END TEST spdk_dd_sparse 00:12:47.843 ************************************ 00:12:47.843 14:38:56 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:12:47.843 14:38:56 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:47.843 14:38:56 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:47.843 14:38:56 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:12:47.843 ************************************ 00:12:47.843 START TEST spdk_dd_negative 00:12:47.843 ************************************ 00:12:47.843 14:38:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:12:47.843 * Looking for test storage... 
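The two bdev-backed cases, dd_sparse_file_to_bdev and dd_sparse_bdev_to_file, push the same data through a thin-provisioned logical volume: the first writes file_zero2 into a freshly created 36 MiB thin lvol (bdev_lvol_create with lvs_name=dd_lvstore, lvol_name=dd_lvol, thin_provision=true), the second reads dd_lvstore/dd_lvol back into file_zero3, and the usual stat %s/%b comparison against file_zero2 closes the loop. A condensed sketch of the two invocations; BDEV_JSON is a placeholder for the configs shown in the trace (the read-back direction only needs the bdev_aio_create entry, since the lvstore is discovered when the AIO bdev is examined):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

# file -> thin-provisioned lvol; this config additionally carries bdev_lvol_create
"$SPDK_DD" --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json "$BDEV_JSON"

# lvol -> file, then confirm apparent size and allocated blocks both round-tripped
"$SPDK_DD" --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json "$BDEV_JSON"
[[ $(stat --printf=%s file_zero3) -eq $(stat --printf=%s file_zero2) ]]   # 37748736 bytes
[[ $(stat --printf=%b file_zero3) -eq $(stat --printf=%b file_zero2) ]]   # 24576 blocks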
00:12:48.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:12:48.102 14:38:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:48.102 14:38:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:48.103 14:38:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # lcov --version 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:48.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.103 --rc genhtml_branch_coverage=1 00:12:48.103 --rc genhtml_function_coverage=1 00:12:48.103 --rc genhtml_legend=1 00:12:48.103 --rc geninfo_all_blocks=1 00:12:48.103 --rc geninfo_unexecuted_blocks=1 00:12:48.103 00:12:48.103 ' 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:48.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.103 --rc genhtml_branch_coverage=1 00:12:48.103 --rc genhtml_function_coverage=1 00:12:48.103 --rc genhtml_legend=1 00:12:48.103 --rc geninfo_all_blocks=1 00:12:48.103 --rc geninfo_unexecuted_blocks=1 00:12:48.103 00:12:48.103 ' 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:48.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.103 --rc genhtml_branch_coverage=1 00:12:48.103 --rc genhtml_function_coverage=1 00:12:48.103 --rc genhtml_legend=1 00:12:48.103 --rc geninfo_all_blocks=1 00:12:48.103 --rc geninfo_unexecuted_blocks=1 00:12:48.103 00:12:48.103 ' 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:48.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.103 --rc genhtml_branch_coverage=1 00:12:48.103 --rc genhtml_function_coverage=1 00:12:48.103 --rc genhtml_legend=1 00:12:48.103 --rc geninfo_all_blocks=1 00:12:48.103 --rc geninfo_unexecuted_blocks=1 00:12:48.103 00:12:48.103 ' 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:48.103 ************************************ 00:12:48.103 START TEST 
dd_invalid_arguments 00:12:48.103 ************************************ 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1127 -- # invalid_arguments 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:48.103 14:38:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:12:48.103 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:12:48.103 00:12:48.103 CPU options: 00:12:48.103 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:12:48.103 (like [0,1,10]) 00:12:48.103 --lcores lcore to CPU mapping list. The list is in the format: 00:12:48.103 [<,lcores[@CPUs]>...] 00:12:48.103 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:12:48.103 Within the group, '-' is used for range separator, 00:12:48.103 ',' is used for single number separator. 00:12:48.103 '( )' can be omitted for single element group, 00:12:48.103 '@' can be omitted if cpus and lcores have the same value 00:12:48.103 --disable-cpumask-locks Disable CPU core lock files. 00:12:48.103 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:12:48.103 pollers in the app support interrupt mode) 00:12:48.103 -p, --main-core main (primary) core for DPDK 00:12:48.103 00:12:48.103 Configuration options: 00:12:48.103 -c, --config, --json JSON config file 00:12:48.104 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:12:48.104 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:12:48.104 --wait-for-rpc wait for RPCs to initialize subsystems 00:12:48.104 --rpcs-allowed comma-separated list of permitted RPCS 00:12:48.104 --json-ignore-init-errors don't exit on invalid config entry 00:12:48.104 00:12:48.104 Memory options: 00:12:48.104 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:12:48.104 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:12:48.104 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:12:48.104 -R, --huge-unlink unlink huge files after initialization 00:12:48.104 -n, --mem-channels number of memory channels used for DPDK 00:12:48.104 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:12:48.104 --msg-mempool-size global message memory pool size in count (default: 262143) 00:12:48.104 --no-huge run without using hugepages 00:12:48.104 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:12:48.104 -i, --shm-id shared memory ID (optional) 00:12:48.104 -g, --single-file-segments force creating just one hugetlbfs file 00:12:48.104 00:12:48.104 PCI options: 00:12:48.104 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:12:48.104 -B, --pci-blocked pci addr to block (can be used more than once) 00:12:48.104 -u, --no-pci disable PCI access 00:12:48.104 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:12:48.104 00:12:48.104 Log options: 00:12:48.104 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:12:48.104 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:12:48.104 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:12:48.104 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:12:48.104 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:12:48.104 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:12:48.104 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:12:48.104 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:12:48.104 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:12:48.104 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:12:48.104 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:12:48.104 --silence-noticelog disable notice level logging to stderr 00:12:48.104 00:12:48.104 Trace options: 00:12:48.104 --num-trace-entries number of trace entries for each core, must be power of 2, 00:12:48.104 setting 0 to disable trace (default 32768) 00:12:48.104 Tracepoints vary in size and can use more than one trace entry. 00:12:48.104 -e, --tpoint-group [:] 00:12:48.104 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:12:48.104 [2024-11-04 14:38:57.112537] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:12:48.104 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:12:48.104 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:12:48.104 bdev_raid, scheduler, all). 00:12:48.104 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:12:48.104 a tracepoint group. First tpoint inside a group can be enabled by 00:12:48.104 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:12:48.104 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:12:48.104 in /include/spdk_internal/trace_defs.h 00:12:48.104 00:12:48.104 Other options: 00:12:48.104 -h, --help show this usage 00:12:48.104 -v, --version print SPDK version 00:12:48.104 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:12:48.104 --env-context Opaque context for use of the env implementation 00:12:48.104 00:12:48.104 Application specific: 00:12:48.104 [--------- DD Options ---------] 00:12:48.104 --if Input file. Must specify either --if or --ib. 00:12:48.104 --ib Input bdev. Must specifier either --if or --ib 00:12:48.104 --of Output file. Must specify either --of or --ob. 00:12:48.104 --ob Output bdev. Must specify either --of or --ob. 00:12:48.104 --iflag Input file flags. 00:12:48.104 --oflag Output file flags. 00:12:48.104 --bs I/O unit size (default: 4096) 00:12:48.104 --qd Queue depth (default: 2) 00:12:48.104 --count I/O unit count. The number of I/O units to copy. (default: all) 00:12:48.104 --skip Skip this many I/O units at start of input. (default: 0) 00:12:48.104 --seek Skip this many I/O units at start of output. (default: 0) 00:12:48.104 --aio Force usage of AIO. (by default io_uring is used if available) 00:12:48.104 --sparse Enable hole skipping in input target 00:12:48.104 Available iflag and oflag values: 00:12:48.104 append - append mode 00:12:48.104 direct - use direct I/O for data 00:12:48.104 directory - fail unless a directory 00:12:48.104 dsync - use synchronized I/O for data 00:12:48.104 noatime - do not update access time 00:12:48.104 noctty - do not assign controlling terminal from file 00:12:48.104 nofollow - do not follow symlinks 00:12:48.104 nonblock - use non-blocking I/O 00:12:48.104 sync - use synchronized I/O for data and metadata 00:12:48.104 14:38:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:12:48.104 14:38:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:48.104 14:38:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:48.104 14:38:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:48.104 00:12:48.104 real 0m0.046s 00:12:48.104 user 0m0.027s 00:12:48.104 sys 0m0.018s 00:12:48.104 14:38:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:48.104 14:38:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:12:48.104 ************************************ 00:12:48.104 END TEST dd_invalid_arguments 00:12:48.104 ************************************ 00:12:48.104 14:38:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:12:48.104 14:38:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:48.104 14:38:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:48.104 14:38:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:48.104 ************************************ 00:12:48.104 START TEST dd_double_input 00:12:48.104 ************************************ 00:12:48.104 14:38:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1127 -- # double_input 00:12:48.104 14:38:57 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:12:48.104 14:38:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:12:48.104 14:38:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:12:48.104 14:38:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:48.104 14:38:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.104 14:38:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:48.104 14:38:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.104 14:38:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:48.104 14:38:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.104 14:38:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:48.104 14:38:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:48.104 14:38:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:12:48.104 [2024-11-04 14:38:57.200031] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
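Every negative case in this suite follows the pattern visible above and below: spdk_dd is launched through the NOT wrapper from common/autotest_common.sh with an invalid flag combination, the binary prints its usage text or a specific *ERROR* line (spdk_dd.c:1480 and following), and the test only asserts a non-zero exit status (es=2 for the unrecognized --ii= option, es=22 for the flag-combination errors). A minimal stand-in for that pattern; expect_failure is an assumed helper written for this sketch, not the project's NOT():

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

expect_failure() {                       # assumed helper, not the real NOT()
    if "$@"; then
        echo "expected failure but got success: $*" >&2
        return 1
    fi
}

expect_failure "$SPDK_DD" --ii= --ob=                          # unrecognized option
expect_failure "$SPDK_DD" --if="$DUMP0" --ib= --ob=            # --if and --ib together
expect_failure "$SPDK_DD" --if="$DUMP0" --of="$DUMP1" --ob=    # --of and --ob together
expect_failure "$SPDK_DD" --ob=                                # neither --if nor --ib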
00:12:48.104 14:38:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:12:48.104 14:38:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:48.104 14:38:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:48.104 14:38:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:48.104 00:12:48.104 real 0m0.049s 00:12:48.104 user 0m0.033s 00:12:48.104 sys 0m0.016s 00:12:48.104 14:38:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:48.104 ************************************ 00:12:48.104 END TEST dd_double_input 00:12:48.104 ************************************ 00:12:48.104 14:38:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:48.363 ************************************ 00:12:48.363 START TEST dd_double_output 00:12:48.363 ************************************ 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1127 -- # double_output 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:12:48.363 [2024-11-04 14:38:57.285244] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:48.363 00:12:48.363 real 0m0.048s 00:12:48.363 user 0m0.034s 00:12:48.363 sys 0m0.013s 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:12:48.363 ************************************ 00:12:48.363 END TEST dd_double_output 00:12:48.363 ************************************ 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:48.363 ************************************ 00:12:48.363 START TEST dd_no_input 00:12:48.363 ************************************ 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1127 -- # no_input 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:12:48.363 [2024-11-04 14:38:57.370053] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:48.363 00:12:48.363 real 0m0.046s 00:12:48.363 user 0m0.030s 00:12:48.363 sys 0m0.016s 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:12:48.363 ************************************ 00:12:48.363 END TEST dd_no_input 00:12:48.363 ************************************ 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:48.363 ************************************ 00:12:48.363 START TEST dd_no_output 00:12:48.363 ************************************ 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1127 -- # no_output 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.363 14:38:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:48.364 14:38:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.364 14:38:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:48.364 14:38:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:48.364 14:38:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:48.364 [2024-11-04 14:38:57.449341] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:12:48.364 14:38:57 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:12:48.364 14:38:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:48.364 14:38:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:48.364 14:38:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:48.364 00:12:48.364 real 0m0.043s 00:12:48.364 user 0m0.027s 00:12:48.364 sys 0m0.015s 00:12:48.364 14:38:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:48.364 14:38:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:12:48.364 ************************************ 00:12:48.364 END TEST dd_no_output 00:12:48.364 ************************************ 00:12:48.364 14:38:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:12:48.364 14:38:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:48.364 14:38:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:48.364 14:38:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:48.364 ************************************ 00:12:48.364 START TEST dd_wrong_blocksize 00:12:48.364 ************************************ 00:12:48.364 14:38:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1127 -- # wrong_blocksize 00:12:48.364 14:38:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:12:48.364 14:38:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:12:48.364 14:38:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:12:48.364 14:38:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:48.364 14:38:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.364 14:38:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:48.626 14:38:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.626 14:38:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:48.626 14:38:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.626 14:38:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:48.626 14:38:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:48.626 14:38:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:12:48.626 [2024-11-04 14:38:57.536695] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:12:48.626 14:38:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:12:48.626 14:38:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:48.626 14:38:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:48.626 14:38:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:48.626 00:12:48.626 real 0m0.049s 00:12:48.626 user 0m0.029s 00:12:48.626 sys 0m0.019s 00:12:48.626 14:38:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:48.626 14:38:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:12:48.626 ************************************ 00:12:48.626 END TEST dd_wrong_blocksize 00:12:48.626 ************************************ 00:12:48.626 14:38:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:12:48.626 14:38:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:48.626 14:38:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:48.626 14:38:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:48.626 ************************************ 00:12:48.626 START TEST dd_smaller_blocksize 00:12:48.626 ************************************ 00:12:48.626 14:38:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1127 -- # smaller_blocksize 00:12:48.626 14:38:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:12:48.626 14:38:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:12:48.626 14:38:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:12:48.626 14:38:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:48.626 14:38:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.626 14:38:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:48.626 14:38:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.626 14:38:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:48.626 14:38:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.626 14:38:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:48.626 
14:38:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:48.626 14:38:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:12:48.626 [2024-11-04 14:38:57.623167] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:12:48.626 [2024-11-04 14:38:57.623229] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60810 ] 00:12:48.626 [2024-11-04 14:38:57.753983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.883 [2024-11-04 14:38:57.789417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.883 [2024-11-04 14:38:57.819978] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:49.141 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:12:49.141 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:12:49.141 [2024-11-04 14:38:58.252214] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:12:49.141 [2024-11-04 14:38:58.252274] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:49.399 [2024-11-04 14:38:58.310938] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:49.399 00:12:49.399 real 0m0.771s 00:12:49.399 user 0m0.224s 00:12:49.399 sys 0m0.440s 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:12:49.399 ************************************ 00:12:49.399 END TEST dd_smaller_blocksize 00:12:49.399 ************************************ 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:49.399 ************************************ 00:12:49.399 START TEST dd_invalid_count 00:12:49.399 ************************************ 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1127 -- # invalid_count 
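dd_smaller_blocksize, which finished just above, is the one negative case that exercises the allocator rather than argument parsing: a 99999999999999-byte block size cannot be backed by hugepages (EAL: couldn't find suitable memseg_list), spdk_dd prints 'Cannot allocate memory - try smaller block size value' and exits, and the wrapper maps the raw status (es=244, then 116) to an expected failure. The same case expressed with the assumed expect_failure helper from the earlier annotation:

# oversized block size: argument parsing succeeds, the copy itself fails to allocate
expect_failure "$SPDK_DD" --if="$DUMP0" --of="$DUMP1" --bs=99999999999999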
00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:12:49.399 [2024-11-04 14:38:58.423466] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:49.399 00:12:49.399 real 0m0.046s 00:12:49.399 user 0m0.031s 00:12:49.399 sys 0m0.014s 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:12:49.399 ************************************ 00:12:49.399 END TEST dd_invalid_count 00:12:49.399 ************************************ 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:49.399 ************************************ 
00:12:49.399 START TEST dd_invalid_oflag 00:12:49.399 ************************************ 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1127 -- # invalid_oflag 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:12:49.399 [2024-11-04 14:38:58.510145] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:49.399 00:12:49.399 real 0m0.050s 00:12:49.399 user 0m0.030s 00:12:49.399 sys 0m0.020s 00:12:49.399 ************************************ 00:12:49.399 END TEST dd_invalid_oflag 00:12:49.399 ************************************ 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:49.399 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:49.657 ************************************ 00:12:49.657 START TEST dd_invalid_iflag 00:12:49.657 
************************************ 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1127 -- # invalid_iflag 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:12:49.657 [2024-11-04 14:38:58.593594] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:49.657 ************************************ 00:12:49.657 END TEST dd_invalid_iflag 00:12:49.657 ************************************ 00:12:49.657 00:12:49.657 real 0m0.048s 00:12:49.657 user 0m0.030s 00:12:49.657 sys 0m0.017s 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:49.657 ************************************ 00:12:49.657 START TEST dd_unknown_flag 00:12:49.657 ************************************ 00:12:49.657 
14:38:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1127 -- # unknown_flag 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:49.657 14:38:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:12:49.657 [2024-11-04 14:38:58.683239] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:49.657 [2024-11-04 14:38:58.683304] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60902 ] 00:12:49.915 [2024-11-04 14:38:58.819506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.916 [2024-11-04 14:38:58.857895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.916 [2024-11-04 14:38:58.890556] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:49.916 [2024-11-04 14:38:58.915904] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:12:49.916 [2024-11-04 14:38:58.915950] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:49.916 [2024-11-04 14:38:58.915985] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:12:49.916 [2024-11-04 14:38:58.915992] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:49.916 [2024-11-04 14:38:58.916152] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:12:49.916 [2024-11-04 14:38:58.916160] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:49.916 [2024-11-04 14:38:58.916192] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:12:49.916 [2024-11-04 14:38:58.916198] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:12:49.916 [2024-11-04 14:38:58.975317] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:49.916 ************************************ 00:12:49.916 END TEST dd_unknown_flag 00:12:49.916 ************************************ 00:12:49.916 14:38:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:12:49.916 14:38:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:49.916 14:38:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:12:49.916 14:38:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:12:49.916 14:38:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:12:49.916 14:38:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:49.916 00:12:49.916 real 0m0.372s 00:12:49.916 user 0m0.185s 00:12:49.916 sys 0m0.096s 00:12:49.916 14:38:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:49.916 14:38:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:12:49.916 14:38:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:12:49.916 14:38:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:49.916 14:38:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:49.916 14:38:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:50.173 ************************************ 00:12:50.173 START TEST dd_invalid_json 00:12:50.173 ************************************ 00:12:50.173 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1127 -- # invalid_json 00:12:50.173 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:12:50.173 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:12:50.173 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:12:50.173 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:50.173 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:12:50.173 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:50.173 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:50.173 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:50.173 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:50.173 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:50.173 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:50.173 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:50.173 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:12:50.173 [2024-11-04 14:38:59.093740] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:50.173 [2024-11-04 14:38:59.093798] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60925 ] 00:12:50.173 [2024-11-04 14:38:59.234212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.173 [2024-11-04 14:38:59.273067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.173 [2024-11-04 14:38:59.273127] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:12:50.173 [2024-11-04 14:38:59.273139] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:50.173 [2024-11-04 14:38:59.273145] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:50.173 [2024-11-04 14:38:59.273173] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:50.435 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:12:50.435 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:50.435 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:12:50.435 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:12:50.435 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:12:50.435 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:50.435 00:12:50.435 real 0m0.261s 00:12:50.435 user 0m0.116s 00:12:50.435 sys 0m0.044s 00:12:50.435 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:50.435 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:12:50.435 ************************************ 00:12:50.435 END TEST dd_invalid_json 00:12:50.435 ************************************ 00:12:50.435 14:38:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:12:50.436 14:38:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:50.436 14:38:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:50.436 14:38:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:50.436 ************************************ 00:12:50.436 START TEST dd_invalid_seek 00:12:50.436 ************************************ 00:12:50.436 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1127 -- # invalid_seek 00:12:50.436 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:12:50.436 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:12:50.436 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:12:50.436 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:12:50.436 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:12:50.436 
14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:12:50.436 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:12:50.436 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@650 -- # local es=0 00:12:50.436 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:12:50.436 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:50.436 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:12:50.436 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:12:50.436 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:12:50.436 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:50.436 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:50.436 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:50.436 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:50.436 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:50.436 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:50.436 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:50.436 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:12:50.436 [2024-11-04 14:38:59.394011] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:50.436 [2024-11-04 14:38:59.394073] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60947 ] 00:12:50.436 { 00:12:50.436 "subsystems": [ 00:12:50.436 { 00:12:50.436 "subsystem": "bdev", 00:12:50.436 "config": [ 00:12:50.436 { 00:12:50.436 "params": { 00:12:50.436 "block_size": 512, 00:12:50.436 "num_blocks": 512, 00:12:50.436 "name": "malloc0" 00:12:50.436 }, 00:12:50.436 "method": "bdev_malloc_create" 00:12:50.436 }, 00:12:50.436 { 00:12:50.436 "params": { 00:12:50.436 "block_size": 512, 00:12:50.436 "num_blocks": 512, 00:12:50.436 "name": "malloc1" 00:12:50.436 }, 00:12:50.436 "method": "bdev_malloc_create" 00:12:50.436 }, 00:12:50.436 { 00:12:50.436 "method": "bdev_wait_for_examine" 00:12:50.436 } 00:12:50.436 ] 00:12:50.436 } 00:12:50.436 ] 00:12:50.436 } 00:12:50.436 [2024-11-04 14:38:59.531157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.436 [2024-11-04 14:38:59.567433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.693 [2024-11-04 14:38:59.599023] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:50.693 [2024-11-04 14:38:59.647880] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:12:50.693 [2024-11-04 14:38:59.647926] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:50.693 [2024-11-04 14:38:59.705619] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:50.693 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # es=228 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@662 -- # es=100 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # case "$es" in 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@670 -- # es=1 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:50.694 00:12:50.694 real 0m0.389s 00:12:50.694 user 0m0.240s 00:12:50.694 sys 0m0.089s 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:12:50.694 ************************************ 00:12:50.694 END TEST dd_invalid_seek 00:12:50.694 ************************************ 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:50.694 ************************************ 00:12:50.694 START TEST dd_invalid_skip 00:12:50.694 ************************************ 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1127 -- # invalid_skip 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@650 -- # local es=0 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:50.694 14:38:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:12:50.694 [2024-11-04 14:38:59.825803] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:50.694 [2024-11-04 14:38:59.825869] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60988 ] 00:12:50.694 { 00:12:50.694 "subsystems": [ 00:12:50.694 { 00:12:50.694 "subsystem": "bdev", 00:12:50.694 "config": [ 00:12:50.694 { 00:12:50.694 "params": { 00:12:50.694 "block_size": 512, 00:12:50.694 "num_blocks": 512, 00:12:50.694 "name": "malloc0" 00:12:50.694 }, 00:12:50.694 "method": "bdev_malloc_create" 00:12:50.694 }, 00:12:50.694 { 00:12:50.694 "params": { 00:12:50.694 "block_size": 512, 00:12:50.694 "num_blocks": 512, 00:12:50.694 "name": "malloc1" 00:12:50.694 }, 00:12:50.694 "method": "bdev_malloc_create" 00:12:50.694 }, 00:12:50.694 { 00:12:50.694 "method": "bdev_wait_for_examine" 00:12:50.694 } 00:12:50.694 ] 00:12:50.694 } 00:12:50.694 ] 00:12:50.694 } 00:12:50.951 [2024-11-04 14:38:59.966821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.951 [2024-11-04 14:39:00.001620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.951 [2024-11-04 14:39:00.033404] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:50.951 [2024-11-04 14:39:00.081675] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:12:50.951 [2024-11-04 14:39:00.081719] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:51.208 [2024-11-04 14:39:00.138956] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:51.208 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # es=228 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@662 -- # es=100 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # case "$es" in 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@670 -- # es=1 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:51.209 00:12:51.209 real 0m0.391s 00:12:51.209 user 0m0.241s 00:12:51.209 sys 0m0.093s 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:12:51.209 ************************************ 00:12:51.209 END TEST dd_invalid_skip 00:12:51.209 ************************************ 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:51.209 ************************************ 00:12:51.209 START TEST dd_invalid_input_count 00:12:51.209 ************************************ 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1127 -- # invalid_input_count 00:12:51.209 14:39:00 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@650 -- # local es=0 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:51.209 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:12:51.209 { 00:12:51.209 "subsystems": [ 00:12:51.209 { 00:12:51.209 "subsystem": "bdev", 00:12:51.209 "config": [ 00:12:51.209 { 00:12:51.209 "params": { 00:12:51.209 "block_size": 512, 00:12:51.209 "num_blocks": 512, 00:12:51.209 "name": "malloc0" 00:12:51.209 }, 
00:12:51.209 "method": "bdev_malloc_create" 00:12:51.209 }, 00:12:51.209 { 00:12:51.209 "params": { 00:12:51.209 "block_size": 512, 00:12:51.209 "num_blocks": 512, 00:12:51.209 "name": "malloc1" 00:12:51.209 }, 00:12:51.209 "method": "bdev_malloc_create" 00:12:51.209 }, 00:12:51.209 { 00:12:51.209 "method": "bdev_wait_for_examine" 00:12:51.209 } 00:12:51.209 ] 00:12:51.209 } 00:12:51.209 ] 00:12:51.209 } 00:12:51.209 [2024-11-04 14:39:00.260697] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:12:51.209 [2024-11-04 14:39:00.260753] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61015 ] 00:12:51.466 [2024-11-04 14:39:00.399226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.466 [2024-11-04 14:39:00.434865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.466 [2024-11-04 14:39:00.465407] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:51.466 [2024-11-04 14:39:00.516970] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:12:51.466 [2024-11-04 14:39:00.517016] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:51.466 [2024-11-04 14:39:00.577142] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # es=228 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@662 -- # es=100 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # case "$es" in 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@670 -- # es=1 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:51.740 00:12:51.740 real 0m0.400s 00:12:51.740 user 0m0.237s 00:12:51.740 sys 0m0.099s 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:12:51.740 ************************************ 00:12:51.740 END TEST dd_invalid_input_count 00:12:51.740 ************************************ 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:51.740 ************************************ 00:12:51.740 START TEST dd_invalid_output_count 00:12:51.740 ************************************ 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1127 -- # invalid_output_count 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 
mbdev0_b=512 mbdev0_bs=512 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@650 -- # local es=0 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:51.740 14:39:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:12:51.740 { 00:12:51.740 "subsystems": [ 00:12:51.740 { 00:12:51.740 "subsystem": "bdev", 00:12:51.740 "config": [ 00:12:51.740 { 00:12:51.740 "params": { 00:12:51.740 "block_size": 512, 00:12:51.740 "num_blocks": 512, 00:12:51.740 "name": "malloc0" 00:12:51.740 }, 00:12:51.740 "method": "bdev_malloc_create" 00:12:51.740 }, 00:12:51.740 { 00:12:51.740 "method": "bdev_wait_for_examine" 00:12:51.740 } 00:12:51.740 ] 00:12:51.740 } 00:12:51.740 ] 00:12:51.740 } 00:12:51.740 [2024-11-04 14:39:00.705231] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:51.740 [2024-11-04 14:39:00.705302] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61056 ] 00:12:51.740 [2024-11-04 14:39:00.843057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.015 [2024-11-04 14:39:00.878727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.015 [2024-11-04 14:39:00.909849] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:52.015 [2024-11-04 14:39:00.950385] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:12:52.015 [2024-11-04 14:39:00.950433] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:52.015 [2024-11-04 14:39:01.011784] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:52.015 14:39:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # es=228 00:12:52.015 14:39:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:52.015 14:39:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@662 -- # es=100 00:12:52.015 14:39:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # case "$es" in 00:12:52.015 14:39:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@670 -- # es=1 00:12:52.016 14:39:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:52.016 00:12:52.016 real 0m0.394s 00:12:52.016 user 0m0.227s 00:12:52.016 sys 0m0.097s 00:12:52.016 14:39:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:52.016 14:39:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:12:52.016 ************************************ 00:12:52.016 END TEST dd_invalid_output_count 00:12:52.016 ************************************ 00:12:52.016 14:39:01 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:12:52.016 14:39:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:52.016 14:39:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:52.016 14:39:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:52.016 ************************************ 00:12:52.016 START TEST dd_bs_not_multiple 00:12:52.016 ************************************ 00:12:52.016 14:39:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1127 -- # bs_not_multiple 00:12:52.016 14:39:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:12:52.016 14:39:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:12:52.016 14:39:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:12:52.016 14:39:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:12:52.016 14:39:01 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:12:52.016 14:39:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:12:52.016 14:39:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:12:52.016 14:39:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:12:52.016 14:39:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:12:52.016 14:39:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@650 -- # local es=0 00:12:52.016 14:39:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:12:52.016 14:39:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:12:52.016 14:39:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:52.016 14:39:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.016 14:39:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:52.016 14:39:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.016 14:39:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:52.016 14:39:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.016 14:39:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:52.016 14:39:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:52.016 14:39:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:12:52.016 { 00:12:52.016 "subsystems": [ 00:12:52.016 { 00:12:52.016 "subsystem": "bdev", 00:12:52.016 "config": [ 00:12:52.016 { 00:12:52.016 "params": { 00:12:52.016 "block_size": 512, 00:12:52.016 "num_blocks": 512, 00:12:52.016 "name": "malloc0" 00:12:52.016 }, 00:12:52.016 "method": "bdev_malloc_create" 00:12:52.016 }, 00:12:52.016 { 00:12:52.016 "params": { 00:12:52.016 "block_size": 512, 00:12:52.016 "num_blocks": 512, 00:12:52.016 "name": "malloc1" 00:12:52.016 }, 00:12:52.016 "method": "bdev_malloc_create" 00:12:52.016 }, 00:12:52.016 { 00:12:52.016 "method": "bdev_wait_for_examine" 00:12:52.016 } 00:12:52.016 ] 00:12:52.016 } 00:12:52.016 ] 00:12:52.016 } 00:12:52.016 [2024-11-04 14:39:01.136827] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:12:52.016 [2024-11-04 14:39:01.136891] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61081 ] 00:12:52.273 [2024-11-04 14:39:01.275728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.273 [2024-11-04 14:39:01.312537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.273 [2024-11-04 14:39:01.342898] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:52.273 [2024-11-04 14:39:01.392211] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:12:52.273 [2024-11-04 14:39:01.392260] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:52.531 [2024-11-04 14:39:01.449365] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:52.531 14:39:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # es=234 00:12:52.531 14:39:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:52.531 14:39:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@662 -- # es=106 00:12:52.531 14:39:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # case "$es" in 00:12:52.531 14:39:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@670 -- # es=1 00:12:52.531 14:39:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:52.531 00:12:52.531 real 0m0.395s 00:12:52.531 user 0m0.239s 00:12:52.531 sys 0m0.090s 00:12:52.531 14:39:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:52.531 14:39:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:12:52.531 ************************************ 00:12:52.531 END TEST dd_bs_not_multiple 00:12:52.531 ************************************ 00:12:52.531 ************************************ 00:12:52.531 END TEST spdk_dd_negative 00:12:52.531 ************************************ 00:12:52.531 00:12:52.531 real 0m4.604s 00:12:52.531 user 0m2.271s 00:12:52.531 sys 0m1.710s 00:12:52.531 14:39:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:52.531 14:39:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:52.531 00:12:52.531 real 0m57.663s 00:12:52.531 user 0m36.001s 00:12:52.531 sys 0m22.872s 00:12:52.531 14:39:01 spdk_dd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:52.531 14:39:01 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:12:52.531 ************************************ 00:12:52.531 END TEST spdk_dd 00:12:52.531 ************************************ 00:12:52.531 14:39:01 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:12:52.531 14:39:01 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:12:52.531 14:39:01 -- spdk/autotest.sh@256 -- # timing_exit lib 00:12:52.531 14:39:01 -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:52.531 14:39:01 -- common/autotest_common.sh@10 -- # set +x 00:12:52.531 14:39:01 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:12:52.531 14:39:01 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:12:52.531 14:39:01 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:12:52.531 14:39:01 -- spdk/autotest.sh@273 -- 
# export NET_TYPE 00:12:52.531 14:39:01 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:12:52.531 14:39:01 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:12:52.531 14:39:01 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:12:52.531 14:39:01 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:52.531 14:39:01 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:52.531 14:39:01 -- common/autotest_common.sh@10 -- # set +x 00:12:52.531 ************************************ 00:12:52.531 START TEST nvmf_tcp 00:12:52.531 ************************************ 00:12:52.531 14:39:01 nvmf_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:12:52.789 * Looking for test storage... 00:12:52.789 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:12:52.789 14:39:01 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:52.789 14:39:01 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:12:52.789 14:39:01 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:52.789 14:39:01 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:52.789 14:39:01 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:52.789 14:39:01 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:52.789 14:39:01 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:52.789 14:39:01 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.789 14:39:01 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:12:52.789 14:39:01 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:12:52.789 14:39:01 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:12:52.789 14:39:01 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:12:52.789 14:39:01 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:12:52.789 14:39:01 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:12:52.789 14:39:01 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:52.789 14:39:01 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:12:52.789 14:39:01 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:12:52.789 14:39:01 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:52.789 14:39:01 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:52.789 14:39:01 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:12:52.789 14:39:01 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:12:52.789 14:39:01 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:52.789 14:39:01 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:12:52.789 14:39:01 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:12:52.789 14:39:01 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:12:52.789 14:39:01 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:12:52.789 14:39:01 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:52.789 14:39:01 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:12:52.789 14:39:01 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:12:52.789 14:39:01 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:52.789 14:39:01 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:52.789 14:39:01 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:12:52.789 14:39:01 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:52.789 14:39:01 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:52.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.789 --rc genhtml_branch_coverage=1 00:12:52.789 --rc genhtml_function_coverage=1 00:12:52.789 --rc genhtml_legend=1 00:12:52.789 --rc geninfo_all_blocks=1 00:12:52.789 --rc geninfo_unexecuted_blocks=1 00:12:52.789 00:12:52.789 ' 00:12:52.789 14:39:01 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:52.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.789 --rc genhtml_branch_coverage=1 00:12:52.789 --rc genhtml_function_coverage=1 00:12:52.789 --rc genhtml_legend=1 00:12:52.789 --rc geninfo_all_blocks=1 00:12:52.789 --rc geninfo_unexecuted_blocks=1 00:12:52.790 00:12:52.790 ' 00:12:52.790 14:39:01 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:52.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.790 --rc genhtml_branch_coverage=1 00:12:52.790 --rc genhtml_function_coverage=1 00:12:52.790 --rc genhtml_legend=1 00:12:52.790 --rc geninfo_all_blocks=1 00:12:52.790 --rc geninfo_unexecuted_blocks=1 00:12:52.790 00:12:52.790 ' 00:12:52.790 14:39:01 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:52.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.790 --rc genhtml_branch_coverage=1 00:12:52.790 --rc genhtml_function_coverage=1 00:12:52.790 --rc genhtml_legend=1 00:12:52.790 --rc geninfo_all_blocks=1 00:12:52.790 --rc geninfo_unexecuted_blocks=1 00:12:52.790 00:12:52.790 ' 00:12:52.790 14:39:01 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:12:52.790 14:39:01 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:12:52.790 14:39:01 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:12:52.790 14:39:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:52.790 14:39:01 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:52.790 14:39:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:52.790 ************************************ 00:12:52.790 START TEST nvmf_target_core 00:12:52.790 ************************************ 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:12:52.790 * Looking for test storage... 00:12:52.790 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:52.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.790 --rc genhtml_branch_coverage=1 00:12:52.790 --rc genhtml_function_coverage=1 00:12:52.790 --rc genhtml_legend=1 00:12:52.790 --rc geninfo_all_blocks=1 00:12:52.790 --rc geninfo_unexecuted_blocks=1 00:12:52.790 00:12:52.790 ' 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:52.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.790 --rc genhtml_branch_coverage=1 00:12:52.790 --rc genhtml_function_coverage=1 00:12:52.790 --rc genhtml_legend=1 00:12:52.790 --rc geninfo_all_blocks=1 00:12:52.790 --rc geninfo_unexecuted_blocks=1 00:12:52.790 00:12:52.790 ' 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:52.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.790 --rc genhtml_branch_coverage=1 00:12:52.790 --rc genhtml_function_coverage=1 00:12:52.790 --rc genhtml_legend=1 00:12:52.790 --rc geninfo_all_blocks=1 00:12:52.790 --rc geninfo_unexecuted_blocks=1 00:12:52.790 00:12:52.790 ' 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:52.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.790 --rc genhtml_branch_coverage=1 00:12:52.790 --rc genhtml_function_coverage=1 00:12:52.790 --rc genhtml_legend=1 00:12:52.790 --rc geninfo_all_blocks=1 00:12:52.790 --rc geninfo_unexecuted_blocks=1 00:12:52.790 00:12:52.790 ' 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:52.790 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:12:53.071 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.071 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.071 14:39:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.071 14:39:01 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.071 14:39:01 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
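The nvmf/common.sh trace above generates a host NQN with nvme gen-hostnqn and packs it, together with the matching host ID, into the NVME_HOST argument array that later nvme connect calls consume. A minimal sketch of that pattern, assuming nvme-cli is installed; the target address, port and subsystem NQN are the ones this log uses later, not values fixed by common.sh itself, and the host-ID derivation is illustrative:

# Generate a host NQN; its trailing UUID doubles as the host ID.
NVME_HOSTNQN=$(nvme gen-hostnqn)                 # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}                  # keep only the <uuid> part
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

# A later connect call can splice the array straight onto the command line:
nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode0 "${NVME_HOST[@]}"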
00:12:53.071 14:39:01 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.071 14:39:01 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:12:53.071 14:39:01 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.071 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:12:53.071 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:53.071 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:53.071 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:53.071 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:53.071 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:53.071 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:53.071 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:53.071 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:53.071 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:53.071 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:53.071 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:53.071 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:12:53.071 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:12:53.071 14:39:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:53.071 14:39:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:53.071 14:39:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:53.071 14:39:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:53.071 ************************************ 00:12:53.071 START TEST nvmf_host_management 00:12:53.071 ************************************ 00:12:53.071 14:39:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:53.071 * Looking for test storage... 
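The "[: : integer expression expected" complaint from common.sh line 33 above is bash rejecting an arithmetic test whose operand is an empty string: the traced '[' '' -eq 1 ']' has nothing to compare. A two-line illustration of the failure and a guarded form (the variable name is invented for the example):

flag=""
[ "$flag" -eq 1 ]          # bash: [: : integer expression expected (test exits with status 2)
[ "${flag:-0}" -eq 1 ]     # empty value defaults to 0, so the test is well-formed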
00:12:53.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:53.071 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:53.071 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:12:53.071 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:53.071 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:53.071 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:53.071 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:53.071 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:53.071 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:12:53.071 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:12:53.071 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:12:53.071 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:12:53.071 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:12:53.071 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:12:53.071 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:12:53.071 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:53.071 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:12:53.071 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:53.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.072 --rc genhtml_branch_coverage=1 00:12:53.072 --rc genhtml_function_coverage=1 00:12:53.072 --rc genhtml_legend=1 00:12:53.072 --rc geninfo_all_blocks=1 00:12:53.072 --rc geninfo_unexecuted_blocks=1 00:12:53.072 00:12:53.072 ' 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:53.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.072 --rc genhtml_branch_coverage=1 00:12:53.072 --rc genhtml_function_coverage=1 00:12:53.072 --rc genhtml_legend=1 00:12:53.072 --rc geninfo_all_blocks=1 00:12:53.072 --rc geninfo_unexecuted_blocks=1 00:12:53.072 00:12:53.072 ' 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:53.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.072 --rc genhtml_branch_coverage=1 00:12:53.072 --rc genhtml_function_coverage=1 00:12:53.072 --rc genhtml_legend=1 00:12:53.072 --rc geninfo_all_blocks=1 00:12:53.072 --rc geninfo_unexecuted_blocks=1 00:12:53.072 00:12:53.072 ' 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:53.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.072 --rc genhtml_branch_coverage=1 00:12:53.072 --rc genhtml_function_coverage=1 00:12:53.072 --rc genhtml_legend=1 00:12:53.072 --rc geninfo_all_blocks=1 00:12:53.072 --rc geninfo_unexecuted_blocks=1 00:12:53.072 00:12:53.072 ' 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
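The lt/cmp_versions trace that recurs above splits each version string on ".", "-" and ":" and compares the pieces numerically, left to right; here 1.15 sorts before 2, so the pre-2.x lcov option names (lcov_branch_coverage, lcov_function_coverage) are exported. A condensed sketch of the same idea -- the function name is invented and this is not the scripts/common.sh cmp_versions code verbatim:

version_lt() {                          # succeeds (returns 0) when $1 < $2
    local -a a b
    local i
    IFS=.-: read -ra a <<< "$1"         # "1.15" -> (1 15), same split the trace shows
    IFS=.-: read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                            # versions are equal
}

version_lt 1.15 2 && echo "lcov predates 2.x, use the older option names"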
00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:53.072 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:53.072 14:39:02 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:53.072 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:53.073 Cannot find device "nvmf_init_br" 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:53.073 Cannot find device "nvmf_init_br2" 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:53.073 Cannot find device "nvmf_tgt_br" 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:53.073 Cannot find device "nvmf_tgt_br2" 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:53.073 Cannot find device "nvmf_init_br" 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:53.073 Cannot find device "nvmf_init_br2" 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:53.073 Cannot find device "nvmf_tgt_br" 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:53.073 Cannot find device "nvmf_tgt_br2" 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:53.073 Cannot find device "nvmf_br" 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:53.073 Cannot find device "nvmf_init_if" 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:53.073 Cannot find device "nvmf_init_if2" 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:12:53.073 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:53.331 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:53.331 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:12:53.331 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:53.331 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:53.331 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:12:53.331 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:53.331 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:53.332 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:53.332 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:12:53.332 00:12:53.332 --- 10.0.0.3 ping statistics --- 00:12:53.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.332 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:53.332 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:53.332 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.028 ms 00:12:53.332 00:12:53.332 --- 10.0.0.4 ping statistics --- 00:12:53.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.332 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:12:53.332 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:53.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:53.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:12:53.591 00:12:53.591 --- 10.0.0.1 ping statistics --- 00:12:53.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.591 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:12:53.591 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:53.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:53.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms 00:12:53.591 00:12:53.591 --- 10.0.0.2 ping statistics --- 00:12:53.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.591 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:12:53.591 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:53.591 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:12:53.591 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:53.591 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:53.591 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:53.591 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:53.591 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:53.591 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:53.591 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:53.591 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:53.591 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:53.591 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:53.591 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:53.591 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:53.591 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:53.591 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=61411 00:12:53.591 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 61411 00:12:53.591 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 61411 ']' 00:12:53.591 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.591 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:53.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.591 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.591 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:53.591 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:53.591 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:53.591 [2024-11-04 14:39:02.538343] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
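The ip/iptables commands traced above build the virtual network the TCP tests run over: the initiator veth ends (10.0.0.1 and 10.0.0.2) stay in the root namespace, the target ends (10.0.0.3 and 10.0.0.4) are moved into nvmf_tgt_ns_spdk, everything is joined through the nvmf_br bridge, and port 4420 is opened with iptables before the ping reachability checks. A condensed sketch of the same topology reduced to a single initiator/target pair; device names and addresses mirror the log, but this is not the common.sh code verbatim:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link add nvmf_br type bridge                               # bridge the two *_br peer ends together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                            # initiator -> target reachability check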
00:12:53.591 [2024-11-04 14:39:02.538405] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.591 [2024-11-04 14:39:02.672540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:53.591 [2024-11-04 14:39:02.712844] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:53.591 [2024-11-04 14:39:02.712896] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:53.591 [2024-11-04 14:39:02.712906] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:53.591 [2024-11-04 14:39:02.712914] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:53.591 [2024-11-04 14:39:02.712919] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:53.591 [2024-11-04 14:39:02.716643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.591 [2024-11-04 14:39:02.716800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:53.591 [2024-11-04 14:39:02.716901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:53.591 [2024-11-04 14:39:02.716903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.849 [2024-11-04 14:39:02.753077] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:53.849 [2024-11-04 14:39:02.834638] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:53.849 Malloc0 00:12:53.849 [2024-11-04 14:39:02.900965] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=61463 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 61463 /var/tmp/bdevperf.sock 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 61463 ']' 00:12:53.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
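The rpcs.txt script that host_management.sh writes and cats into rpc_cmd is not echoed in this log, but the surrounding notices (TCP transport init, a 64 MB Malloc0 bdev with 512-byte blocks, a listener on 10.0.0.3 port 4420) outline the usual sequence. A plausible equivalent issued with scripts/rpc.py against the nvmf_tgt started above; the subsystem and host NQNs are the ones that appear later in this log, and the exact contents and ordering of the test's rpcs.txt are not reproduced here:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0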
00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:53.849 { 00:12:53.849 "params": { 00:12:53.849 "name": "Nvme$subsystem", 00:12:53.849 "trtype": "$TEST_TRANSPORT", 00:12:53.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:53.849 "adrfam": "ipv4", 00:12:53.849 "trsvcid": "$NVMF_PORT", 00:12:53.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:53.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:53.849 "hdgst": ${hdgst:-false}, 00:12:53.849 "ddgst": ${ddgst:-false} 00:12:53.849 }, 00:12:53.849 "method": "bdev_nvme_attach_controller" 00:12:53.849 } 00:12:53.849 EOF 00:12:53.849 )") 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:12:53.849 14:39:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:53.849 "params": { 00:12:53.849 "name": "Nvme0", 00:12:53.849 "trtype": "tcp", 00:12:53.849 "traddr": "10.0.0.3", 00:12:53.849 "adrfam": "ipv4", 00:12:53.849 "trsvcid": "4420", 00:12:53.849 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:53.849 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:53.849 "hdgst": false, 00:12:53.849 "ddgst": false 00:12:53.849 }, 00:12:53.849 "method": "bdev_nvme_attach_controller" 00:12:53.849 }' 00:12:53.849 [2024-11-04 14:39:02.978130] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:12:53.850 [2024-11-04 14:39:02.978195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61463 ] 00:12:54.107 [2024-11-04 14:39:03.117976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.107 [2024-11-04 14:39:03.154344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.107 [2024-11-04 14:39:03.194767] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:54.364 Running I/O for 10 seconds... 
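gen_nvmf_target_json above renders a one-controller bdev config and hands it to bdevperf through --json /dev/fd/63, so no config file ever lands on disk. The same run can be reproduced with an ordinary file; the JSON below is a minimal standalone rendering of the controller entry printed in the log (the real helper wraps it with additional bdev options), and the file path is invented for the example:

cat > /tmp/nvme0_bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# 64-deep queue, 64 KiB I/Os, verify workload, 10 seconds -- matching the command line traced above
build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0_bdevperf.json \
    -q 64 -o 65536 -w verify -t 10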
00:12:54.364 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:54.364 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:12:54.364 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:54.364 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.364 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:54.364 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.364 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:54.364 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:54.364 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:54.364 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:54.364 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:54.364 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:54.364 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:54.365 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:54.365 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:54.365 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.365 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:54.365 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:54.365 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.365 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:12:54.365 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:12:54.365 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:12:54.624 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:12:54.624 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:54.624 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:54.624 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:54.624 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.624 14:39:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:54.624 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.624 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:12:54.624 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:12:54.624 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:12:54.624 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:54.624 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:54.624 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:54.624 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.624 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:54.624 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.624 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:54.624 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.624 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:54.624 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.624 14:39:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:12:54.624 [2024-11-04 14:39:03.694334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.624 [2024-11-04 14:39:03.694381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.624 [2024-11-04 14:39:03.694398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.624 [2024-11-04 14:39:03.694404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.624 [2024-11-04 14:39:03.694412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.624 [2024-11-04 14:39:03.694418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.624 [2024-11-04 14:39:03.694426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.624 [2024-11-04 14:39:03.694431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.624 [2024-11-04 14:39:03.694439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.624 [2024-11-04 14:39:03.694444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.624 [2024-11-04 14:39:03.694452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.624 [2024-11-04 14:39:03.694458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.624 [2024-11-04 14:39:03.694465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.624 [2024-11-04 14:39:03.694471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.624 [2024-11-04 14:39:03.694478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.624 [2024-11-04 14:39:03.694484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.624 [2024-11-04 14:39:03.694491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.624 [2024-11-04 14:39:03.694497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.624 [2024-11-04 14:39:03.694504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.624 [2024-11-04 14:39:03.694509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.624 [2024-11-04 14:39:03.694517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.624 [2024-11-04 14:39:03.694522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.624 [2024-11-04 14:39:03.694529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.624 [2024-11-04 14:39:03.694535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.624 [2024-11-04 14:39:03.694542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.624 [2024-11-04 14:39:03.694547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.624 [2024-11-04 14:39:03.694555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.624 [2024-11-04 14:39:03.694560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91904 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:12:54.625 [2024-11-04 14:39:03.694985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.694993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.694998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.695005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.695011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.695020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.695026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.695034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.695039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.695047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.695053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.695061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.695066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.625 [2024-11-04 14:39:03.695074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.625 [2024-11-04 14:39:03.695080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.626 [2024-11-04 14:39:03.695088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.626 [2024-11-04 14:39:03.695093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.626 [2024-11-04 14:39:03.695101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.626 [2024-11-04 14:39:03.695107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.626 [2024-11-04 14:39:03.695114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.626 
[2024-11-04 14:39:03.695120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.626 [2024-11-04 14:39:03.695127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.626 [2024-11-04 14:39:03.695133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.626 [2024-11-04 14:39:03.695140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.626 [2024-11-04 14:39:03.695146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.626 [2024-11-04 14:39:03.695153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.626 [2024-11-04 14:39:03.695159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.626 [2024-11-04 14:39:03.695166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.626 [2024-11-04 14:39:03.695172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.626 [2024-11-04 14:39:03.695179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.626 [2024-11-04 14:39:03.695185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.626 [2024-11-04 14:39:03.695193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.626 [2024-11-04 14:39:03.695199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.626 [2024-11-04 14:39:03.695206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.626 [2024-11-04 14:39:03.695211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.626 [2024-11-04 14:39:03.695218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.626 [2024-11-04 14:39:03.695224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.626 [2024-11-04 14:39:03.695233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:54.626 [2024-11-04 14:39:03.695239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.626 [2024-11-04 14:39:03.695246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14362d0 is same with the state(6) to be set 00:12:54.626 [2024-11-04 14:39:03.695364] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:12:54.626 [2024-11-04 14:39:03.695380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:54.626 [2024-11-04 14:39:03.695387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:12:54.626 [2024-11-04 14:39:03.695392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:54.626 [2024-11-04 14:39:03.695399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:12:54.626 [2024-11-04 14:39:03.695405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:54.626 [2024-11-04 14:39:03.695411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:12:54.626 [2024-11-04 14:39:03.695416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:54.626 [2024-11-04 14:39:03.695423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143bce0 is same with the state(6) to be set
00:12:54.626 [2024-11-04 14:39:03.696545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:12:54.626 task offset: 90112 on job bdev=Nvme0n1 fails
00:12:54.626
00:12:54.626 Latency(us)
00:12:54.626 [2024-11-04T14:39:03.766Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:54.626 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:12:54.626 Job: Nvme0n1 ended in about 0.39 seconds with error
00:12:54.626 Verification LBA range: start 0x0 length 0x400
00:12:54.626 Nvme0n1 : 0.39 1802.74 112.67 163.89 0.00 31516.34 1562.78 31457.28
00:12:54.626 [2024-11-04T14:39:03.766Z] ===================================================================================================================
00:12:54.626 [2024-11-04T14:39:03.766Z] Total : 1802.74 112.67 163.89 0.00 31516.34 1562.78 31457.28
00:12:54.626 [2024-11-04 14:39:03.698592] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:12:54.626 [2024-11-04 14:39:03.698630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143bce0 (9): Bad file descriptor
00:12:54.626 [2024-11-04 14:39:03.702884] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
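The failed run above is the point of the host-management test: bdevperf drives verify I/O at queue depth 64 while the script waits for reads to accumulate, then revokes and restores the host's access to the subsystem, which is what produces the ABORTED - SQ DELETION completions and the controller reset logged here. A minimal sketch of that sequence, assuming the same sockets and NQNs as the trace (the real script wraps these calls in rpc_cmd and a bounded retry loop):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # poll bdevperf's iostat until Nvme0n1 has completed at least 100 reads
    while :; do
        reads=$("$rpc_py" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
        [ "$reads" -ge 100 ] && break
        sleep 0.25
    done
    # revoke and restore the host's access while I/O is still in flight
    "$rpc_py" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    "$rpc_py" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0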
00:12:55.558 14:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 61463 00:12:55.558 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (61463) - No such process 00:12:55.558 14:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:12:55.558 14:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:55.558 14:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:55.558 14:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:55.558 14:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:12:55.558 14:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:12:55.558 14:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:55.558 14:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:55.558 { 00:12:55.558 "params": { 00:12:55.558 "name": "Nvme$subsystem", 00:12:55.558 "trtype": "$TEST_TRANSPORT", 00:12:55.558 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:55.558 "adrfam": "ipv4", 00:12:55.558 "trsvcid": "$NVMF_PORT", 00:12:55.558 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:55.558 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:55.558 "hdgst": ${hdgst:-false}, 00:12:55.558 "ddgst": ${ddgst:-false} 00:12:55.558 }, 00:12:55.558 "method": "bdev_nvme_attach_controller" 00:12:55.558 } 00:12:55.558 EOF 00:12:55.558 )") 00:12:55.816 14:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:12:55.816 14:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:12:55.817 14:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:12:55.817 14:39:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:55.817 "params": { 00:12:55.817 "name": "Nvme0", 00:12:55.817 "trtype": "tcp", 00:12:55.817 "traddr": "10.0.0.3", 00:12:55.817 "adrfam": "ipv4", 00:12:55.817 "trsvcid": "4420", 00:12:55.817 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:55.817 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:55.817 "hdgst": false, 00:12:55.817 "ddgst": false 00:12:55.817 }, 00:12:55.817 "method": "bdev_nvme_attach_controller" 00:12:55.817 }' 00:12:55.817 [2024-11-04 14:39:04.732137] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
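The bdevperf restart above takes its bdev configuration from gen_nvmf_target_json over /dev/fd/62; the resolved bdev_nvme_attach_controller entry is the JSON printed just above. A hedged equivalent that writes the same JSON to a regular file first (the /tmp path is illustrative, and the command assumes a shell that has sourced test/nvmf/common.sh so gen_nvmf_target_json and the NVMF_* variables are defined):

    # illustrative output path; the test itself pipes the JSON through /dev/fd/62
    gen_nvmf_target_json 0 > /tmp/bdevperf_nvme0.json
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf_nvme0.json \
        -q 64 -o 65536 -w verify -t 1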
00:12:55.817 [2024-11-04 14:39:04.732195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61503 ]
00:12:55.817 [2024-11-04 14:39:04.872996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:55.817 [2024-11-04 14:39:04.907382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:56.075 [2024-11-04 14:39:04.960429] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:12:56.075 Running I/O for 1 seconds...
00:12:57.010 1920.00 IOPS, 120.00 MiB/s
00:12:57.010
00:12:57.010 Latency(us)
00:12:57.010 [2024-11-04T14:39:06.150Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:57.010 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:12:57.010 Verification LBA range: start 0x0 length 0x400
00:12:57.010 Nvme0n1 : 1.02 1951.32 121.96 0.00 0.00 32217.81 3150.77 30449.03
00:12:57.010 [2024-11-04T14:39:06.150Z] ===================================================================================================================
00:12:57.010 [2024-11-04T14:39:06.150Z] Total : 1951.32 121.96 0.00 0.00 32217.81 3150.77 30449.03
00:12:57.267 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:12:57.267 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:12:57.267 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf
00:12:57.267 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
00:12:57.267 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:12:57.267 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:12:57.267 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:12:57.267 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:12:57.267 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:12:57.267 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:57.267 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:12:57.268 rmmod nvme_tcp
00:12:57.268 rmmod nvme_fabrics
00:12:57.268 rmmod nvme_keyring
00:12:57.268 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:57.268 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:12:57.268 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:12:57.268 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 61411 ']'
00:12:57.268 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 61411
00:12:57.268 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 61411 ']'
00:12:57.268 14:39:06
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 61411 00:12:57.268 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:12:57.268 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:57.268 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61411 00:12:57.268 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:57.268 killing process with pid 61411 00:12:57.268 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:57.268 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61411' 00:12:57.268 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 61411 00:12:57.268 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 61411 00:12:57.525 [2024-11-04 14:39:06.426472] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:57.525 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:57.525 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:57.525 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:57.525 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:12:57.525 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:12:57.525 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:57.525 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:12:57.525 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:57.525 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:57.525 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:57.525 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:57.525 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:57.526 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:57.526 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:57.526 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:57.526 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:57.526 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:57.526 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:57.526 14:39:06 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:57.526 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:57.526 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:57.526 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:12:57.785 00:12:57.785 real 0m4.765s 00:12:57.785 user 0m16.995s 00:12:57.785 sys 0m1.049s 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:57.785 ************************************ 00:12:57.785 END TEST nvmf_host_management 00:12:57.785 ************************************ 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:57.785 ************************************ 00:12:57.785 START TEST nvmf_lvol 00:12:57.785 ************************************ 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:57.785 * Looking for test storage... 
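The nvmftestfini teardown traced at the end of the host-management test reduces to a handful of operations. A condensed sketch, not the literal nvmf/common.sh code; $nvmfpid stands in for the target pid (61411) killed in the trace, and the link and namespace names are the fixtures created earlier by nvmf_veth_init:

    # $nvmfpid is illustrative; the trace kills pid 61411 directly
    kill "$nvmfpid" && wait "$nvmfpid"
    # unload the host-side fabrics modules (the helper retries this up to 20 times)
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # drop the SPDK_NVMF iptables rules added for the test
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # detach the veth peers from the bridge and delete the test links
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    # remove_spdk_ns then drops the nvmf_tgt_ns_spdk namespace itself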
00:12:57.785 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:57.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.785 --rc genhtml_branch_coverage=1 00:12:57.785 --rc genhtml_function_coverage=1 00:12:57.785 --rc genhtml_legend=1 00:12:57.785 --rc geninfo_all_blocks=1 00:12:57.785 --rc geninfo_unexecuted_blocks=1 00:12:57.785 00:12:57.785 ' 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:57.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.785 --rc genhtml_branch_coverage=1 00:12:57.785 --rc genhtml_function_coverage=1 00:12:57.785 --rc genhtml_legend=1 00:12:57.785 --rc geninfo_all_blocks=1 00:12:57.785 --rc geninfo_unexecuted_blocks=1 00:12:57.785 00:12:57.785 ' 00:12:57.785 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:57.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.786 --rc genhtml_branch_coverage=1 00:12:57.786 --rc genhtml_function_coverage=1 00:12:57.786 --rc genhtml_legend=1 00:12:57.786 --rc geninfo_all_blocks=1 00:12:57.786 --rc geninfo_unexecuted_blocks=1 00:12:57.786 00:12:57.786 ' 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:57.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.786 --rc genhtml_branch_coverage=1 00:12:57.786 --rc genhtml_function_coverage=1 00:12:57.786 --rc genhtml_legend=1 00:12:57.786 --rc geninfo_all_blocks=1 00:12:57.786 --rc geninfo_unexecuted_blocks=1 00:12:57.786 00:12:57.786 ' 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:57.786 14:39:06 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:57.786 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:12:57.786 
14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
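nvmf_veth_init then builds the test network that the following ip commands assemble. A condensed sketch of the first initiator/target pair only, using the addresses and device names from the trace (the real fixture also creates the *_if2/*_br2 pair with 10.0.0.2 and 10.0.0.4):

    # the target side lives in its own network namespace
    ip netns add nvmf_tgt_ns_spdk
    # one veth pair for the initiator, one for the target
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # initiator gets 10.0.0.1, target (inside the namespace) gets 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # bring the interfaces up and bridge the host-side peers together
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br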
00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:57.786 Cannot find device "nvmf_init_br" 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:12:57.786 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:58.045 Cannot find device "nvmf_init_br2" 00:12:58.045 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:12:58.045 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:58.045 Cannot find device "nvmf_tgt_br" 00:12:58.045 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:12:58.045 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:58.045 Cannot find device "nvmf_tgt_br2" 00:12:58.045 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:12:58.045 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:58.045 Cannot find device "nvmf_init_br" 00:12:58.045 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:12:58.045 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:58.045 Cannot find device "nvmf_init_br2" 00:12:58.045 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:12:58.045 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:58.045 Cannot find device "nvmf_tgt_br" 00:12:58.045 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:12:58.045 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:58.045 Cannot find device "nvmf_tgt_br2" 00:12:58.045 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:12:58.045 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:58.045 Cannot find device "nvmf_br" 00:12:58.045 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:12:58.045 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:58.045 Cannot find device "nvmf_init_if" 00:12:58.045 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:12:58.045 14:39:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:58.045 Cannot find device "nvmf_init_if2" 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:58.045 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:58.045 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:58.045 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:58.046 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:58.046 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:58.046 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:58.046 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:58.046 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:58.046 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:12:58.046 00:12:58.046 --- 10.0.0.3 ping statistics --- 00:12:58.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.046 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:12:58.046 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:58.046 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:58.046 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:12:58.046 00:12:58.046 --- 10.0.0.4 ping statistics --- 00:12:58.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.046 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:12:58.046 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:58.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:58.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:12:58.046 00:12:58.046 --- 10.0.0.1 ping statistics --- 00:12:58.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.046 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:12:58.046 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:58.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:58.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:12:58.046 00:12:58.046 --- 10.0.0.2 ping statistics --- 00:12:58.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.046 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:12:58.046 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:58.046 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:12:58.046 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:58.046 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:58.046 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:58.046 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:58.046 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:58.046 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:58.046 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:58.046 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:58.046 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:58.046 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:58.046 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:58.303 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=61767 00:12:58.303 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 61767 00:12:58.303 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:58.303 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 61767 ']' 00:12:58.303 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.303 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:58.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.303 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.303 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:58.303 14:39:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:58.304 [2024-11-04 14:39:07.219903] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
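Note: the nvmf/common.sh@508 command above launches nvmf_tgt inside the nvmf_tgt_ns_spdk namespace with a 3-core mask (-m 0x7), and once waitforlisten sees the RPC socket the lvol test that follows is driven entirely through rpc.py: create the TCP transport, assemble a raid0 from two malloc bdevs, put an lvstore on it, carve out an lvol, expose it as a namespace of nqn.2016-06.io.spdk:cnode0 listening on 10.0.0.3:4420, then snapshot, resize, clone and inflate it while spdk_nvme_perf keeps writing. A condensed sketch of that RPC sequence, assuming rpc_py points at scripts/rpc.py as set above and the <...> placeholders stand for UUIDs printed by the preceding calls:

  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512                            # Malloc0
  $rpc_py bdev_malloc_create 64 512                            # Malloc1
  $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  $rpc_py bdev_lvol_create_lvstore raid0 lvs                   # prints the lvstore UUID
  $rpc_py bdev_lvol_create -u <lvs-uuid> lvol 20               # lvol sized per LVOL_BDEV_INIT_SIZE
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  # while spdk_nvme_perf runs against 10.0.0.3:4420:
  $rpc_py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT
  $rpc_py bdev_lvol_resize <lvol-uuid> 30                      # LVOL_BDEV_FINAL_SIZE
  $rpc_py bdev_lvol_clone <snapshot-uuid> MY_CLONE
  $rpc_py bdev_lvol_inflate <clone-uuid>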
00:12:58.304 [2024-11-04 14:39:07.219962] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.304 [2024-11-04 14:39:07.358269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:58.304 [2024-11-04 14:39:07.393633] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:58.304 [2024-11-04 14:39:07.393680] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.304 [2024-11-04 14:39:07.393686] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:58.304 [2024-11-04 14:39:07.393691] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:58.304 [2024-11-04 14:39:07.393696] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:58.304 [2024-11-04 14:39:07.394361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.304 [2024-11-04 14:39:07.394474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:58.304 [2024-11-04 14:39:07.394479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.304 [2024-11-04 14:39:07.426603] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:59.237 14:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:59.237 14:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:12:59.237 14:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:59.237 14:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:59.237 14:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:59.237 14:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.237 14:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:59.237 [2024-11-04 14:39:08.311938] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:59.237 14:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:59.495 14:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:59.495 14:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:59.788 14:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:59.788 14:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:00.047 14:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:00.305 14:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=60771275-ed3a-4579-a6a0-64a0e7f4f9a3 00:13:00.305 14:39:09 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 60771275-ed3a-4579-a6a0-64a0e7f4f9a3 lvol 20 00:13:00.564 14:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=056fa3b3-dd73-4e43-8344-ed9dfec5bb2a 00:13:00.564 14:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:00.564 14:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 056fa3b3-dd73-4e43-8344-ed9dfec5bb2a 00:13:00.823 14:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:13:01.082 [2024-11-04 14:39:10.066209] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:01.082 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:01.340 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=61839 00:13:01.340 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:01.340 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:02.273 14:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 056fa3b3-dd73-4e43-8344-ed9dfec5bb2a MY_SNAPSHOT 00:13:02.531 14:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e3d1d838-4bb0-414c-838a-af98d9435040 00:13:02.531 14:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 056fa3b3-dd73-4e43-8344-ed9dfec5bb2a 30 00:13:02.789 14:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone e3d1d838-4bb0-414c-838a-af98d9435040 MY_CLONE 00:13:03.046 14:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=3534b1a0-a226-447a-a195-f0950ca7a611 00:13:03.046 14:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 3534b1a0-a226-447a-a195-f0950ca7a611 00:13:03.303 14:39:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 61839 00:13:11.446 Initializing NVMe Controllers 00:13:11.446 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:13:11.447 Controller IO queue size 128, less than required. 00:13:11.447 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:11.447 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:11.447 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:11.447 Initialization complete. Launching workers. 
00:13:11.447 ======================================================== 00:13:11.447 Latency(us) 00:13:11.447 Device Information : IOPS MiB/s Average min max 00:13:11.447 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15559.80 60.78 8226.73 1521.39 37905.38 00:13:11.447 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15366.90 60.03 8330.16 247.72 63982.87 00:13:11.447 ======================================================== 00:13:11.447 Total : 30926.70 120.81 8278.12 247.72 63982.87 00:13:11.447 00:13:11.447 14:39:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:11.704 14:39:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 056fa3b3-dd73-4e43-8344-ed9dfec5bb2a 00:13:11.962 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 60771275-ed3a-4579-a6a0-64a0e7f4f9a3 00:13:12.220 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:12.220 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:12.220 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:12.220 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:12.220 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:12.480 rmmod nvme_tcp 00:13:12.480 rmmod nvme_fabrics 00:13:12.480 rmmod nvme_keyring 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 61767 ']' 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 61767 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 61767 ']' 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 61767 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61767 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:12.480 killing process with pid 61767 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 61767' 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 61767 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 61767 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:12.480 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:12.739 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:12.739 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:12.739 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:12.739 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:12.739 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:12.739 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:12.739 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:12.739 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:12.739 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:12.739 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:12.739 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.739 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:12.739 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.739 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:13:12.739 00:13:12.739 real 0m15.050s 00:13:12.739 user 1m3.209s 00:13:12.739 sys 0m3.518s 00:13:12.739 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:13:12.739 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:12.739 ************************************ 00:13:12.739 END TEST nvmf_lvol 00:13:12.739 ************************************ 00:13:12.739 14:39:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:12.739 14:39:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:12.739 14:39:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:12.739 14:39:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:12.739 ************************************ 00:13:12.739 START TEST nvmf_lvs_grow 00:13:12.739 ************************************ 00:13:12.739 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:12.998 * Looking for test storage... 00:13:12.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:12.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.998 --rc genhtml_branch_coverage=1 00:13:12.998 --rc genhtml_function_coverage=1 00:13:12.998 --rc genhtml_legend=1 00:13:12.998 --rc geninfo_all_blocks=1 00:13:12.998 --rc geninfo_unexecuted_blocks=1 00:13:12.998 00:13:12.998 ' 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:12.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.998 --rc genhtml_branch_coverage=1 00:13:12.998 --rc genhtml_function_coverage=1 00:13:12.998 --rc genhtml_legend=1 00:13:12.998 --rc geninfo_all_blocks=1 00:13:12.998 --rc geninfo_unexecuted_blocks=1 00:13:12.998 00:13:12.998 ' 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:12.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.998 --rc genhtml_branch_coverage=1 00:13:12.998 --rc genhtml_function_coverage=1 00:13:12.998 --rc genhtml_legend=1 00:13:12.998 --rc geninfo_all_blocks=1 00:13:12.998 --rc geninfo_unexecuted_blocks=1 00:13:12.998 00:13:12.998 ' 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:12.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.998 --rc genhtml_branch_coverage=1 00:13:12.998 --rc genhtml_function_coverage=1 00:13:12.998 --rc genhtml_legend=1 00:13:12.998 --rc geninfo_all_blocks=1 00:13:12.998 --rc geninfo_unexecuted_blocks=1 00:13:12.998 00:13:12.998 ' 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:12.998 14:39:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:12.998 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:12.999 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.999 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.999 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.999 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:12.999 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.999 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:13:12.999 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:12.999 14:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:12.999 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
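Note: unlike the lvol test above, nvmf_lvs_grow drives I/O through a separate bdevperf process rather than a kernel initiator, so two RPC endpoints are in play: the target keeps answering on the default /var/tmp/spdk.sock while bdevperf listens on the bdevperf_rpc_sock set on the @12 line above. Calls intended for bdevperf are routed with rpc.py's -s option; a minimal sketch, with the attach command matching the one issued later in this run:

  # target-side RPCs use the default socket
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # bdevperf-side RPCs name its socket explicitly
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0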
00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:12.999 Cannot find device "nvmf_init_br" 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:12.999 Cannot find device "nvmf_init_br2" 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:12.999 Cannot find device "nvmf_tgt_br" 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:12.999 Cannot find device "nvmf_tgt_br2" 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:12.999 Cannot find device "nvmf_init_br" 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:12.999 Cannot find device "nvmf_init_br2" 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:12.999 Cannot find device "nvmf_tgt_br" 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:12.999 Cannot find device "nvmf_tgt_br2" 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:12.999 Cannot find device "nvmf_br" 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:12.999 Cannot find device "nvmf_init_if" 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:12.999 Cannot find device "nvmf_init_if2" 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:12.999 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:12.999 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:12.999 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:13.257 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:13.257 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:13.257 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:13.257 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:13.257 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:13.257 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:13.257 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
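Note: once the bridge is wired up, the ipts wrapper (used on the next lines, and earlier at nvmf/common.sh@217-219) opens TCP port 4420 on both initiator interfaces and permits forwarding across nvmf_br, tagging every rule with an SPDK_NVMF comment; teardown (the iptr step seen at the end of the lvol test above) can then strip exactly those rules and nothing else. The tag-and-restore pattern, as it appears in this log:

  # insert a rule carrying a recognizable comment
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  # at cleanup, drop every tagged rule in one pass
  iptables-save | grep -v SPDK_NVMF | iptables-restore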
00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:13.258 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:13.258 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:13:13.258 00:13:13.258 --- 10.0.0.3 ping statistics --- 00:13:13.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.258 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:13.258 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:13.258 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:13:13.258 00:13:13.258 --- 10.0.0.4 ping statistics --- 00:13:13.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.258 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:13.258 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:13.258 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:13:13.258 00:13:13.258 --- 10.0.0.1 ping statistics --- 00:13:13.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.258 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:13.258 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:13.258 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:13:13.258 00:13:13.258 --- 10.0.0.2 ping statistics --- 00:13:13.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.258 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:13.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=62226 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 62226 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 62226 ']' 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:13.258 14:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:13.258 [2024-11-04 14:39:22.331185] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
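Note: nvmfappstart above records the new target's PID (nvmfpid=62226) and waitforlisten blocks until that process answers on /var/tmp/spdk.sock before any further RPCs are issued. The real helper lives in SPDK's autotest common scripts; a rough shell equivalent of the wait, assuming rpc.py accepts -t for a per-call timeout and using rpc_get_methods as the probe RPC:

  nvmfpid=62226
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening"; exit 1; }
      sleep 0.5
  done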
00:13:13.258 [2024-11-04 14:39:22.331242] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.516 [2024-11-04 14:39:22.469713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.516 [2024-11-04 14:39:22.508010] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:13.516 [2024-11-04 14:39:22.508232] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:13.516 [2024-11-04 14:39:22.508243] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:13.516 [2024-11-04 14:39:22.508248] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:13.517 [2024-11-04 14:39:22.508253] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:13.517 [2024-11-04 14:39:22.508538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.517 [2024-11-04 14:39:22.540446] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:14.082 14:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:14.082 14:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:13:14.082 14:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:14.082 14:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:14.082 14:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:14.082 14:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:14.082 14:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:14.339 [2024-11-04 14:39:23.390286] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:14.339 14:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:14.339 14:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:14.339 14:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:14.339 14:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:14.339 ************************************ 00:13:14.339 START TEST lvs_grow_clean 00:13:14.339 ************************************ 00:13:14.339 14:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:13:14.339 14:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:14.339 14:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:14.339 14:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:14.339 14:39:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:14.339 14:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:14.339 14:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:14.339 14:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:14.339 14:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:14.339 14:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:14.597 14:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:14.597 14:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:14.856 14:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c0b52bca-09df-499c-9373-5cfaeb22cd09 00:13:14.856 14:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0b52bca-09df-499c-9373-5cfaeb22cd09 00:13:14.856 14:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:14.856 14:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:14.856 14:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:14.856 14:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c0b52bca-09df-499c-9373-5cfaeb22cd09 lvol 150 00:13:15.114 14:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e26b6072-765a-459c-a82d-8502224d2274 00:13:15.114 14:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:15.114 14:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:15.372 [2024-11-04 14:39:24.349416] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:15.372 [2024-11-04 14:39:24.349481] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:15.372 true 00:13:15.372 14:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0b52bca-09df-499c-9373-5cfaeb22cd09 00:13:15.372 14:39:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:15.630 14:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:15.630 14:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:15.887 14:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e26b6072-765a-459c-a82d-8502224d2274 00:13:15.887 14:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:13:16.144 [2024-11-04 14:39:25.173895] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:16.144 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:16.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:16.402 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=62303 00:13:16.402 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:16.402 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:16.402 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 62303 /var/tmp/bdevperf.sock 00:13:16.402 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 62303 ']' 00:13:16.402 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:16.402 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:16.402 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:16.402 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:16.402 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:16.402 [2024-11-04 14:39:25.439047] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
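Unpacked from the xtrace above, the provisioning half of the clean-grow case boils down to the RPC sequence below. This is only a sketch assembled from the commands visible in the log (the repo paths, the 4 MiB cluster size, the 150 MiB lvol and the 10.0.0.3:4420 listener are all taken from it); it assumes an nvmf_tgt is already serving the default /var/tmp/spdk.sock, and the $spdk/$rpc/$aio_file/$lvs/$lvol shell variables are illustrative names, not part of the test script.

  # Minimal sketch (assumes a running nvmf_tgt on /var/tmp/spdk.sock; variable names are illustrative)
  spdk=/home/vagrant/spdk_repo/spdk
  rpc=$spdk/scripts/rpc.py
  aio_file=$spdk/test/nvmf/target/aio_bdev

  truncate -s 200M "$aio_file"                                      # 200 MiB backing file
  "$rpc" bdev_aio_create "$aio_file" aio_bdev 4096                  # expose it as bdev "aio_bdev"
  lvs=$("$rpc" bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)              # prints the lvstore UUID; 49 data clusters at this size
  lvol=$("$rpc" bdev_lvol_create -u "$lvs" lvol 150)                # 150 MiB volume, prints its UUID
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

The backing file is then doubled to 400M and bdev_aio_rescan is issued, which is what produces the "old block count 51200, new block count 102400" notice seen earlier in the trace.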
00:13:16.402 [2024-11-04 14:39:25.439391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62303 ] 00:13:16.659 [2024-11-04 14:39:25.578672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.659 [2024-11-04 14:39:25.616010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.659 [2024-11-04 14:39:25.648979] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:17.224 14:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:17.224 14:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:13:17.224 14:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:17.482 Nvme0n1 00:13:17.740 14:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:17.740 [ 00:13:17.740 { 00:13:17.740 "name": "Nvme0n1", 00:13:17.740 "aliases": [ 00:13:17.740 "e26b6072-765a-459c-a82d-8502224d2274" 00:13:17.740 ], 00:13:17.740 "product_name": "NVMe disk", 00:13:17.740 "block_size": 4096, 00:13:17.740 "num_blocks": 38912, 00:13:17.740 "uuid": "e26b6072-765a-459c-a82d-8502224d2274", 00:13:17.740 "numa_id": -1, 00:13:17.740 "assigned_rate_limits": { 00:13:17.740 "rw_ios_per_sec": 0, 00:13:17.740 "rw_mbytes_per_sec": 0, 00:13:17.740 "r_mbytes_per_sec": 0, 00:13:17.740 "w_mbytes_per_sec": 0 00:13:17.740 }, 00:13:17.740 "claimed": false, 00:13:17.740 "zoned": false, 00:13:17.740 "supported_io_types": { 00:13:17.740 "read": true, 00:13:17.740 "write": true, 00:13:17.740 "unmap": true, 00:13:17.740 "flush": true, 00:13:17.740 "reset": true, 00:13:17.740 "nvme_admin": true, 00:13:17.740 "nvme_io": true, 00:13:17.740 "nvme_io_md": false, 00:13:17.740 "write_zeroes": true, 00:13:17.740 "zcopy": false, 00:13:17.740 "get_zone_info": false, 00:13:17.740 "zone_management": false, 00:13:17.740 "zone_append": false, 00:13:17.740 "compare": true, 00:13:17.740 "compare_and_write": true, 00:13:17.740 "abort": true, 00:13:17.740 "seek_hole": false, 00:13:17.740 "seek_data": false, 00:13:17.740 "copy": true, 00:13:17.740 "nvme_iov_md": false 00:13:17.740 }, 00:13:17.740 "memory_domains": [ 00:13:17.740 { 00:13:17.740 "dma_device_id": "system", 00:13:17.740 "dma_device_type": 1 00:13:17.740 } 00:13:17.740 ], 00:13:17.740 "driver_specific": { 00:13:17.740 "nvme": [ 00:13:17.740 { 00:13:17.740 "trid": { 00:13:17.740 "trtype": "TCP", 00:13:17.740 "adrfam": "IPv4", 00:13:17.740 "traddr": "10.0.0.3", 00:13:17.740 "trsvcid": "4420", 00:13:17.740 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:17.740 }, 00:13:17.740 "ctrlr_data": { 00:13:17.740 "cntlid": 1, 00:13:17.740 "vendor_id": "0x8086", 00:13:17.740 "model_number": "SPDK bdev Controller", 00:13:17.741 "serial_number": "SPDK0", 00:13:17.741 "firmware_revision": "25.01", 00:13:17.741 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:17.741 "oacs": { 00:13:17.741 "security": 0, 00:13:17.741 "format": 0, 00:13:17.741 "firmware": 0, 
00:13:17.741 "ns_manage": 0 00:13:17.741 }, 00:13:17.741 "multi_ctrlr": true, 00:13:17.741 "ana_reporting": false 00:13:17.741 }, 00:13:17.741 "vs": { 00:13:17.741 "nvme_version": "1.3" 00:13:17.741 }, 00:13:17.741 "ns_data": { 00:13:17.741 "id": 1, 00:13:17.741 "can_share": true 00:13:17.741 } 00:13:17.741 } 00:13:17.741 ], 00:13:17.741 "mp_policy": "active_passive" 00:13:17.741 } 00:13:17.741 } 00:13:17.741 ] 00:13:17.741 14:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=62326 00:13:17.741 14:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:17.741 14:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:17.998 Running I/O for 10 seconds... 00:13:18.930 Latency(us) 00:13:18.930 [2024-11-04T14:39:28.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.931 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:18.931 Nvme0n1 : 1.00 10228.00 39.95 0.00 0.00 0.00 0.00 0.00 00:13:18.931 [2024-11-04T14:39:28.071Z] =================================================================================================================== 00:13:18.931 [2024-11-04T14:39:28.071Z] Total : 10228.00 39.95 0.00 0.00 0.00 0.00 0.00 00:13:18.931 00:13:19.865 14:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c0b52bca-09df-499c-9373-5cfaeb22cd09 00:13:19.865 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:19.865 Nvme0n1 : 2.00 10500.00 41.02 0.00 0.00 0.00 0.00 0.00 00:13:19.865 [2024-11-04T14:39:29.005Z] =================================================================================================================== 00:13:19.865 [2024-11-04T14:39:29.005Z] Total : 10500.00 41.02 0.00 0.00 0.00 0.00 0.00 00:13:19.865 00:13:20.123 true 00:13:20.123 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0b52bca-09df-499c-9373-5cfaeb22cd09 00:13:20.123 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:20.380 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:20.380 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:20.380 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 62326 00:13:20.943 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:20.943 Nvme0n1 : 3.00 10592.00 41.38 0.00 0.00 0.00 0.00 0.00 00:13:20.943 [2024-11-04T14:39:30.083Z] =================================================================================================================== 00:13:20.943 [2024-11-04T14:39:30.083Z] Total : 10592.00 41.38 0.00 0.00 0.00 0.00 0.00 00:13:20.943 00:13:21.875 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:21.875 Nvme0n1 : 4.00 10291.00 40.20 0.00 0.00 0.00 0.00 0.00 00:13:21.875 [2024-11-04T14:39:31.015Z] 
=================================================================================================================== 00:13:21.875 [2024-11-04T14:39:31.015Z] Total : 10291.00 40.20 0.00 0.00 0.00 0.00 0.00 00:13:21.875 00:13:22.808 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:22.808 Nvme0n1 : 5.00 10315.60 40.30 0.00 0.00 0.00 0.00 0.00 00:13:22.808 [2024-11-04T14:39:31.948Z] =================================================================================================================== 00:13:22.808 [2024-11-04T14:39:31.948Z] Total : 10315.60 40.30 0.00 0.00 0.00 0.00 0.00 00:13:22.808 00:13:24.180 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:24.180 Nvme0n1 : 6.00 10395.50 40.61 0.00 0.00 0.00 0.00 0.00 00:13:24.180 [2024-11-04T14:39:33.320Z] =================================================================================================================== 00:13:24.180 [2024-11-04T14:39:33.320Z] Total : 10395.50 40.61 0.00 0.00 0.00 0.00 0.00 00:13:24.180 00:13:25.114 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:25.114 Nvme0n1 : 7.00 10380.00 40.55 0.00 0.00 0.00 0.00 0.00 00:13:25.114 [2024-11-04T14:39:34.254Z] =================================================================================================================== 00:13:25.114 [2024-11-04T14:39:34.254Z] Total : 10380.00 40.55 0.00 0.00 0.00 0.00 0.00 00:13:25.114 00:13:26.048 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:26.048 Nvme0n1 : 8.00 10416.00 40.69 0.00 0.00 0.00 0.00 0.00 00:13:26.048 [2024-11-04T14:39:35.188Z] =================================================================================================================== 00:13:26.048 [2024-11-04T14:39:35.188Z] Total : 10416.00 40.69 0.00 0.00 0.00 0.00 0.00 00:13:26.048 00:13:27.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:27.023 Nvme0n1 : 9.00 10217.33 39.91 0.00 0.00 0.00 0.00 0.00 00:13:27.023 [2024-11-04T14:39:36.163Z] =================================================================================================================== 00:13:27.023 [2024-11-04T14:39:36.163Z] Total : 10217.33 39.91 0.00 0.00 0.00 0.00 0.00 00:13:27.023 00:13:27.955 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:27.955 Nvme0n1 : 10.00 10008.40 39.10 0.00 0.00 0.00 0.00 0.00 00:13:27.955 [2024-11-04T14:39:37.095Z] =================================================================================================================== 00:13:27.955 [2024-11-04T14:39:37.095Z] Total : 10008.40 39.10 0.00 0.00 0.00 0.00 0.00 00:13:27.955 00:13:27.955 00:13:27.955 Latency(us) 00:13:27.955 [2024-11-04T14:39:37.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.955 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:27.955 Nvme0n1 : 10.01 10010.34 39.10 0.00 0.00 12782.66 4814.38 150027.03 00:13:27.955 [2024-11-04T14:39:37.095Z] =================================================================================================================== 00:13:27.955 [2024-11-04T14:39:37.095Z] Total : 10010.34 39.10 0.00 0.00 12782.66 4814.38 150027.03 00:13:27.955 { 00:13:27.955 "results": [ 00:13:27.955 { 00:13:27.955 "job": "Nvme0n1", 00:13:27.955 "core_mask": "0x2", 00:13:27.955 "workload": "randwrite", 00:13:27.955 "status": "finished", 00:13:27.955 "queue_depth": 128, 00:13:27.955 "io_size": 4096, 00:13:27.955 
"runtime": 10.010846, 00:13:27.955 "iops": 10010.342782218406, 00:13:27.955 "mibps": 39.10290149304065, 00:13:27.955 "io_failed": 0, 00:13:27.955 "io_timeout": 0, 00:13:27.955 "avg_latency_us": 12782.66083386298, 00:13:27.955 "min_latency_us": 4814.375384615385, 00:13:27.955 "max_latency_us": 150027.0276923077 00:13:27.955 } 00:13:27.955 ], 00:13:27.955 "core_count": 1 00:13:27.955 } 00:13:27.955 14:39:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 62303 00:13:27.955 14:39:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 62303 ']' 00:13:27.955 14:39:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 62303 00:13:27.955 14:39:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:13:27.955 14:39:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:27.955 14:39:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62303 00:13:27.955 14:39:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:27.955 14:39:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:27.955 14:39:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62303' 00:13:27.955 killing process with pid 62303 00:13:27.955 14:39:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 62303 00:13:27.955 Received shutdown signal, test time was about 10.000000 seconds 00:13:27.955 00:13:27.955 Latency(us) 00:13:27.955 [2024-11-04T14:39:37.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.955 [2024-11-04T14:39:37.095Z] =================================================================================================================== 00:13:27.955 [2024-11-04T14:39:37.095Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:27.955 14:39:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 62303 00:13:28.213 14:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:28.213 14:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:28.471 14:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0b52bca-09df-499c-9373-5cfaeb22cd09 00:13:28.471 14:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:28.729 14:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:28.729 14:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:28.729 14:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:28.986 [2024-11-04 14:39:38.011211] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:28.986 14:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0b52bca-09df-499c-9373-5cfaeb22cd09 00:13:28.986 14:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:13:28.986 14:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0b52bca-09df-499c-9373-5cfaeb22cd09 00:13:28.986 14:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:28.986 14:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:28.986 14:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:28.986 14:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:28.986 14:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:28.986 14:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:28.986 14:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:28.986 14:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:28.987 14:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0b52bca-09df-499c-9373-5cfaeb22cd09 00:13:29.255 request: 00:13:29.255 { 00:13:29.255 "uuid": "c0b52bca-09df-499c-9373-5cfaeb22cd09", 00:13:29.255 "method": "bdev_lvol_get_lvstores", 00:13:29.255 "req_id": 1 00:13:29.255 } 00:13:29.255 Got JSON-RPC error response 00:13:29.255 response: 00:13:29.255 { 00:13:29.255 "code": -19, 00:13:29.255 "message": "No such device" 00:13:29.255 } 00:13:29.255 14:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:13:29.255 14:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:29.255 14:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:29.255 14:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:29.255 14:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:29.512 aio_bdev 00:13:29.512 14:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
e26b6072-765a-459c-a82d-8502224d2274 00:13:29.512 14:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=e26b6072-765a-459c-a82d-8502224d2274 00:13:29.512 14:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:29.513 14:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:13:29.513 14:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:29.513 14:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:29.513 14:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:29.770 14:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e26b6072-765a-459c-a82d-8502224d2274 -t 2000 00:13:30.028 [ 00:13:30.028 { 00:13:30.028 "name": "e26b6072-765a-459c-a82d-8502224d2274", 00:13:30.028 "aliases": [ 00:13:30.028 "lvs/lvol" 00:13:30.028 ], 00:13:30.028 "product_name": "Logical Volume", 00:13:30.028 "block_size": 4096, 00:13:30.028 "num_blocks": 38912, 00:13:30.028 "uuid": "e26b6072-765a-459c-a82d-8502224d2274", 00:13:30.028 "assigned_rate_limits": { 00:13:30.028 "rw_ios_per_sec": 0, 00:13:30.028 "rw_mbytes_per_sec": 0, 00:13:30.028 "r_mbytes_per_sec": 0, 00:13:30.028 "w_mbytes_per_sec": 0 00:13:30.028 }, 00:13:30.028 "claimed": false, 00:13:30.028 "zoned": false, 00:13:30.028 "supported_io_types": { 00:13:30.028 "read": true, 00:13:30.028 "write": true, 00:13:30.028 "unmap": true, 00:13:30.028 "flush": false, 00:13:30.028 "reset": true, 00:13:30.028 "nvme_admin": false, 00:13:30.028 "nvme_io": false, 00:13:30.028 "nvme_io_md": false, 00:13:30.028 "write_zeroes": true, 00:13:30.028 "zcopy": false, 00:13:30.028 "get_zone_info": false, 00:13:30.028 "zone_management": false, 00:13:30.028 "zone_append": false, 00:13:30.028 "compare": false, 00:13:30.028 "compare_and_write": false, 00:13:30.028 "abort": false, 00:13:30.028 "seek_hole": true, 00:13:30.028 "seek_data": true, 00:13:30.028 "copy": false, 00:13:30.028 "nvme_iov_md": false 00:13:30.028 }, 00:13:30.028 "driver_specific": { 00:13:30.028 "lvol": { 00:13:30.028 "lvol_store_uuid": "c0b52bca-09df-499c-9373-5cfaeb22cd09", 00:13:30.028 "base_bdev": "aio_bdev", 00:13:30.028 "thin_provision": false, 00:13:30.028 "num_allocated_clusters": 38, 00:13:30.028 "snapshot": false, 00:13:30.028 "clone": false, 00:13:30.028 "esnap_clone": false 00:13:30.028 } 00:13:30.028 } 00:13:30.028 } 00:13:30.028 ] 00:13:30.028 14:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:13:30.028 14:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0b52bca-09df-499c-9373-5cfaeb22cd09 00:13:30.028 14:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:30.285 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:30.285 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0b52bca-09df-499c-9373-5cfaeb22cd09 00:13:30.286 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:30.543 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:30.543 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete e26b6072-765a-459c-a82d-8502224d2274 00:13:30.800 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c0b52bca-09df-499c-9373-5cfaeb22cd09 00:13:30.800 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:31.057 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:31.622 ************************************ 00:13:31.622 END TEST lvs_grow_clean 00:13:31.622 ************************************ 00:13:31.622 00:13:31.622 real 0m17.214s 00:13:31.622 user 0m16.279s 00:13:31.622 sys 0m2.145s 00:13:31.622 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:31.622 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:31.622 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:31.622 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:31.622 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:31.622 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:31.622 ************************************ 00:13:31.622 START TEST lvs_grow_dirty 00:13:31.622 ************************************ 00:13:31.622 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:13:31.622 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:31.622 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:31.622 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:31.622 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:31.622 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:31.622 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:31.622 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:31.622 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:31.622 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:31.880 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:31.880 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:32.138 14:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=451ead4b-d633-4e0c-9c73-f4492ebfd331 00:13:32.138 14:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 451ead4b-d633-4e0c-9c73-f4492ebfd331 00:13:32.138 14:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:32.397 14:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:32.397 14:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:32.397 14:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 451ead4b-d633-4e0c-9c73-f4492ebfd331 lvol 150 00:13:32.655 14:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=d2fce6c2-38bb-439e-9030-ee394254999c 00:13:32.655 14:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:32.655 14:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:32.914 [2024-11-04 14:39:41.820711] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:32.914 [2024-11-04 14:39:41.820897] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:32.914 true 00:13:32.914 14:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:32.914 14:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 451ead4b-d633-4e0c-9c73-f4492ebfd331 00:13:33.172 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:33.172 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:33.172 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d2fce6c2-38bb-439e-9030-ee394254999c 00:13:33.430 14:39:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:13:33.687 [2024-11-04 14:39:42.625036] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:33.687 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:33.687 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=62568 00:13:33.687 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:33.687 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 62568 /var/tmp/bdevperf.sock 00:13:33.688 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 62568 ']' 00:13:33.688 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:33.688 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:33.688 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:33.688 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:33.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:33.688 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:33.688 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:33.953 [2024-11-04 14:39:42.855977] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
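On the I/O side, the steps traced below for the dirty case mirror the clean one: bdevperf is started with -z and its own RPC socket, the exported namespace is attached as Nvme0n1 over TCP, perform_tests kicks off the 10-second randwrite run, and the lvstore is grown while that run is in flight. As a rough sketch, reusing $spdk, $rpc and $lvs from the sketch above (illustrative shell only, not the test script):

  "$spdk/build/examples/bdevperf" -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  # (the test waits for bdevperf's RPC socket before issuing the calls below)

  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &

  "$rpc" bdev_lvol_grow_lvstore -u "$lvs"                           # grow while randwrite is in flight
  "$rpc" bdev_lvol_get_lvstores -u "$lvs" \
      | jq -r '.[0].total_data_clusters'                            # 49 before the grow, 99 after

The per-second rows in the Latency tables above line up with the -S 1 option passed to bdevperf, and the final Device Information block is the summary printed when the 10-second run completes.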
00:13:33.953 [2024-11-04 14:39:42.856211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62568 ] 00:13:33.953 [2024-11-04 14:39:42.996102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.953 [2024-11-04 14:39:43.039398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.953 [2024-11-04 14:39:43.074116] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:34.898 14:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:34.898 14:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:13:34.898 14:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:34.898 Nvme0n1 00:13:34.898 14:39:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:35.158 [ 00:13:35.158 { 00:13:35.158 "name": "Nvme0n1", 00:13:35.158 "aliases": [ 00:13:35.158 "d2fce6c2-38bb-439e-9030-ee394254999c" 00:13:35.158 ], 00:13:35.158 "product_name": "NVMe disk", 00:13:35.158 "block_size": 4096, 00:13:35.158 "num_blocks": 38912, 00:13:35.158 "uuid": "d2fce6c2-38bb-439e-9030-ee394254999c", 00:13:35.158 "numa_id": -1, 00:13:35.158 "assigned_rate_limits": { 00:13:35.158 "rw_ios_per_sec": 0, 00:13:35.158 "rw_mbytes_per_sec": 0, 00:13:35.158 "r_mbytes_per_sec": 0, 00:13:35.158 "w_mbytes_per_sec": 0 00:13:35.158 }, 00:13:35.158 "claimed": false, 00:13:35.158 "zoned": false, 00:13:35.158 "supported_io_types": { 00:13:35.158 "read": true, 00:13:35.158 "write": true, 00:13:35.158 "unmap": true, 00:13:35.158 "flush": true, 00:13:35.158 "reset": true, 00:13:35.158 "nvme_admin": true, 00:13:35.158 "nvme_io": true, 00:13:35.158 "nvme_io_md": false, 00:13:35.158 "write_zeroes": true, 00:13:35.158 "zcopy": false, 00:13:35.158 "get_zone_info": false, 00:13:35.158 "zone_management": false, 00:13:35.158 "zone_append": false, 00:13:35.158 "compare": true, 00:13:35.158 "compare_and_write": true, 00:13:35.158 "abort": true, 00:13:35.158 "seek_hole": false, 00:13:35.158 "seek_data": false, 00:13:35.158 "copy": true, 00:13:35.158 "nvme_iov_md": false 00:13:35.158 }, 00:13:35.158 "memory_domains": [ 00:13:35.158 { 00:13:35.158 "dma_device_id": "system", 00:13:35.158 "dma_device_type": 1 00:13:35.158 } 00:13:35.158 ], 00:13:35.158 "driver_specific": { 00:13:35.158 "nvme": [ 00:13:35.158 { 00:13:35.158 "trid": { 00:13:35.158 "trtype": "TCP", 00:13:35.158 "adrfam": "IPv4", 00:13:35.158 "traddr": "10.0.0.3", 00:13:35.158 "trsvcid": "4420", 00:13:35.158 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:35.158 }, 00:13:35.158 "ctrlr_data": { 00:13:35.158 "cntlid": 1, 00:13:35.158 "vendor_id": "0x8086", 00:13:35.158 "model_number": "SPDK bdev Controller", 00:13:35.158 "serial_number": "SPDK0", 00:13:35.158 "firmware_revision": "25.01", 00:13:35.158 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:35.158 "oacs": { 00:13:35.158 "security": 0, 00:13:35.158 "format": 0, 00:13:35.158 "firmware": 0, 
00:13:35.158 "ns_manage": 0 00:13:35.158 }, 00:13:35.158 "multi_ctrlr": true, 00:13:35.158 "ana_reporting": false 00:13:35.158 }, 00:13:35.158 "vs": { 00:13:35.158 "nvme_version": "1.3" 00:13:35.158 }, 00:13:35.158 "ns_data": { 00:13:35.158 "id": 1, 00:13:35.158 "can_share": true 00:13:35.158 } 00:13:35.158 } 00:13:35.158 ], 00:13:35.158 "mp_policy": "active_passive" 00:13:35.158 } 00:13:35.158 } 00:13:35.158 ] 00:13:35.158 14:39:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=62586 00:13:35.158 14:39:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:35.158 14:39:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:35.158 Running I/O for 10 seconds... 00:13:36.090 Latency(us) 00:13:36.090 [2024-11-04T14:39:45.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.090 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:36.090 Nvme0n1 : 1.00 11593.00 45.29 0.00 0.00 0.00 0.00 0.00 00:13:36.090 [2024-11-04T14:39:45.230Z] =================================================================================================================== 00:13:36.090 [2024-11-04T14:39:45.230Z] Total : 11593.00 45.29 0.00 0.00 0.00 0.00 0.00 00:13:36.090 00:13:37.022 14:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 451ead4b-d633-4e0c-9c73-f4492ebfd331 00:13:37.280 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:37.280 Nvme0n1 : 2.00 11312.00 44.19 0.00 0.00 0.00 0.00 0.00 00:13:37.280 [2024-11-04T14:39:46.420Z] =================================================================================================================== 00:13:37.280 [2024-11-04T14:39:46.420Z] Total : 11312.00 44.19 0.00 0.00 0.00 0.00 0.00 00:13:37.280 00:13:37.280 true 00:13:37.280 14:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:37.280 14:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 451ead4b-d633-4e0c-9c73-f4492ebfd331 00:13:37.539 14:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:37.539 14:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:37.539 14:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 62586 00:13:38.104 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:38.104 Nvme0n1 : 3.00 10545.00 41.19 0.00 0.00 0.00 0.00 0.00 00:13:38.104 [2024-11-04T14:39:47.244Z] =================================================================================================================== 00:13:38.104 [2024-11-04T14:39:47.244Z] Total : 10545.00 41.19 0.00 0.00 0.00 0.00 0.00 00:13:38.104 00:13:39.484 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:39.484 Nvme0n1 : 4.00 9968.50 38.94 0.00 0.00 0.00 0.00 0.00 00:13:39.484 [2024-11-04T14:39:48.624Z] 
=================================================================================================================== 00:13:39.484 [2024-11-04T14:39:48.624Z] Total : 9968.50 38.94 0.00 0.00 0.00 0.00 0.00 00:13:39.484 00:13:40.416 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:40.416 Nvme0n1 : 5.00 9727.40 38.00 0.00 0.00 0.00 0.00 0.00 00:13:40.416 [2024-11-04T14:39:49.556Z] =================================================================================================================== 00:13:40.416 [2024-11-04T14:39:49.556Z] Total : 9727.40 38.00 0.00 0.00 0.00 0.00 0.00 00:13:40.416 00:13:41.347 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:41.347 Nvme0n1 : 6.00 9439.67 36.87 0.00 0.00 0.00 0.00 0.00 00:13:41.347 [2024-11-04T14:39:50.487Z] =================================================================================================================== 00:13:41.347 [2024-11-04T14:39:50.487Z] Total : 9439.67 36.87 0.00 0.00 0.00 0.00 0.00 00:13:41.347 00:13:42.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:42.310 Nvme0n1 : 7.00 9361.14 36.57 0.00 0.00 0.00 0.00 0.00 00:13:42.310 [2024-11-04T14:39:51.450Z] =================================================================================================================== 00:13:42.310 [2024-11-04T14:39:51.450Z] Total : 9361.14 36.57 0.00 0.00 0.00 0.00 0.00 00:13:42.310 00:13:43.297 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:43.297 Nvme0n1 : 8.00 9192.38 35.91 0.00 0.00 0.00 0.00 0.00 00:13:43.297 [2024-11-04T14:39:52.437Z] =================================================================================================================== 00:13:43.297 [2024-11-04T14:39:52.437Z] Total : 9192.38 35.91 0.00 0.00 0.00 0.00 0.00 00:13:43.297 00:13:44.230 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:44.230 Nvme0n1 : 9.00 8249.78 32.23 0.00 0.00 0.00 0.00 0.00 00:13:44.230 [2024-11-04T14:39:53.370Z] =================================================================================================================== 00:13:44.230 [2024-11-04T14:39:53.370Z] Total : 8249.78 32.23 0.00 0.00 0.00 0.00 0.00 00:13:44.230 00:13:45.163 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:45.163 Nvme0n1 : 10.00 7990.20 31.21 0.00 0.00 0.00 0.00 0.00 00:13:45.163 [2024-11-04T14:39:54.303Z] =================================================================================================================== 00:13:45.163 [2024-11-04T14:39:54.303Z] Total : 7990.20 31.21 0.00 0.00 0.00 0.00 0.00 00:13:45.163 00:13:45.163 00:13:45.163 Latency(us) 00:13:45.163 [2024-11-04T14:39:54.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.163 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:45.163 Nvme0n1 : 10.01 7993.41 31.22 0.00 0.00 16007.42 2369.38 942105.21 00:13:45.163 [2024-11-04T14:39:54.303Z] =================================================================================================================== 00:13:45.163 [2024-11-04T14:39:54.303Z] Total : 7993.41 31.22 0.00 0.00 16007.42 2369.38 942105.21 00:13:45.163 { 00:13:45.163 "results": [ 00:13:45.163 { 00:13:45.163 "job": "Nvme0n1", 00:13:45.163 "core_mask": "0x2", 00:13:45.163 "workload": "randwrite", 00:13:45.163 "status": "finished", 00:13:45.163 "queue_depth": 128, 00:13:45.163 "io_size": 4096, 00:13:45.163 "runtime": 
10.012001, 00:13:45.163 "iops": 7993.407112124739, 00:13:45.163 "mibps": 31.22424653173726, 00:13:45.163 "io_failed": 0, 00:13:45.163 "io_timeout": 0, 00:13:45.163 "avg_latency_us": 16007.416974942089, 00:13:45.163 "min_latency_us": 2369.3784615384616, 00:13:45.163 "max_latency_us": 942105.2061538461 00:13:45.163 } 00:13:45.163 ], 00:13:45.163 "core_count": 1 00:13:45.163 } 00:13:45.163 14:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 62568 00:13:45.163 14:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 62568 ']' 00:13:45.163 14:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 62568 00:13:45.163 14:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:13:45.163 14:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:45.163 14:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62568 00:13:45.163 14:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:45.163 14:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:45.163 14:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62568' 00:13:45.163 killing process with pid 62568 00:13:45.163 14:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 62568 00:13:45.163 Received shutdown signal, test time was about 10.000000 seconds 00:13:45.163 00:13:45.163 Latency(us) 00:13:45.163 [2024-11-04T14:39:54.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.163 [2024-11-04T14:39:54.303Z] =================================================================================================================== 00:13:45.163 [2024-11-04T14:39:54.303Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:45.163 14:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 62568 00:13:45.422 14:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:45.691 14:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:45.691 14:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:45.691 14:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 451ead4b-d633-4e0c-9c73-f4492ebfd331 00:13:45.949 14:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:45.949 14:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:45.949 14:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 62226 
00:13:45.949 14:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 62226 00:13:45.949 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 62226 Killed "${NVMF_APP[@]}" "$@" 00:13:45.949 14:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:45.949 14:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:45.949 14:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:45.949 14:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:45.949 14:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:45.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.949 14:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=62724 00:13:45.949 14:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 62724 00:13:45.949 14:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 62724 ']' 00:13:45.949 14:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.949 14:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:45.949 14:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:45.949 14:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.949 14:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:45.949 14:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:46.207 [2024-11-04 14:39:55.096634] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:13:46.207 [2024-11-04 14:39:55.096864] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.207 [2024-11-04 14:39:55.238439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.207 [2024-11-04 14:39:55.272927] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:46.207 [2024-11-04 14:39:55.273128] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:46.207 [2024-11-04 14:39:55.273193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:46.207 [2024-11-04 14:39:55.273220] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:46.207 [2024-11-04 14:39:55.273236] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
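The tail of the dirty case differs from the clean one only in how the lvstore is reopened: the original nvmf target is killed with SIGKILL so the store is never cleanly unloaded, a fresh nvmf_tgt is started, and re-creating the AIO bdev triggers the blobstore recovery logged just below before the cluster counts are re-checked. Roughly, with the UUID, paths and PID taken from the log, $spdk/$rpc as in the earlier sketch, and $nvmfpid standing in for the PID shown as 62226:

  kill -9 "$nvmfpid"                                                # dirty shutdown: lvstore left open on disk
  ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &

  # re-creating the AIO bdev reopens the lvstore; "Performing recovery on blobstore" below is that path
  "$rpc" bdev_aio_create "$spdk/test/nvmf/target/aio_bdev" aio_bdev 4096
  "$rpc" bdev_lvol_get_lvstores -u 451ead4b-d633-4e0c-9c73-f4492ebfd331 \
      | jq -r '.[0].free_clusters, .[0].total_data_clusters'        # expected 61 free / 99 total, per the checks below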
00:13:46.207 [2024-11-04 14:39:55.273541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.207 [2024-11-04 14:39:55.304163] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:47.144 14:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:47.144 14:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:13:47.144 14:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:47.144 14:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:47.144 14:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:47.144 14:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.144 14:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:47.144 [2024-11-04 14:39:56.207587] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:47.144 [2024-11-04 14:39:56.208136] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:47.144 [2024-11-04 14:39:56.208586] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:47.144 14:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:47.144 14:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev d2fce6c2-38bb-439e-9030-ee394254999c 00:13:47.144 14:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=d2fce6c2-38bb-439e-9030-ee394254999c 00:13:47.144 14:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:47.144 14:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:13:47.144 14:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:47.144 14:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:47.144 14:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:47.402 14:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d2fce6c2-38bb-439e-9030-ee394254999c -t 2000 00:13:47.660 [ 00:13:47.660 { 00:13:47.660 "name": "d2fce6c2-38bb-439e-9030-ee394254999c", 00:13:47.660 "aliases": [ 00:13:47.660 "lvs/lvol" 00:13:47.660 ], 00:13:47.660 "product_name": "Logical Volume", 00:13:47.660 "block_size": 4096, 00:13:47.660 "num_blocks": 38912, 00:13:47.660 "uuid": "d2fce6c2-38bb-439e-9030-ee394254999c", 00:13:47.660 "assigned_rate_limits": { 00:13:47.660 "rw_ios_per_sec": 0, 00:13:47.660 "rw_mbytes_per_sec": 0, 00:13:47.660 "r_mbytes_per_sec": 0, 00:13:47.660 "w_mbytes_per_sec": 0 00:13:47.660 }, 00:13:47.660 
"claimed": false, 00:13:47.660 "zoned": false, 00:13:47.660 "supported_io_types": { 00:13:47.660 "read": true, 00:13:47.660 "write": true, 00:13:47.660 "unmap": true, 00:13:47.660 "flush": false, 00:13:47.660 "reset": true, 00:13:47.660 "nvme_admin": false, 00:13:47.660 "nvme_io": false, 00:13:47.660 "nvme_io_md": false, 00:13:47.660 "write_zeroes": true, 00:13:47.660 "zcopy": false, 00:13:47.660 "get_zone_info": false, 00:13:47.660 "zone_management": false, 00:13:47.660 "zone_append": false, 00:13:47.660 "compare": false, 00:13:47.660 "compare_and_write": false, 00:13:47.660 "abort": false, 00:13:47.660 "seek_hole": true, 00:13:47.660 "seek_data": true, 00:13:47.660 "copy": false, 00:13:47.660 "nvme_iov_md": false 00:13:47.660 }, 00:13:47.660 "driver_specific": { 00:13:47.660 "lvol": { 00:13:47.660 "lvol_store_uuid": "451ead4b-d633-4e0c-9c73-f4492ebfd331", 00:13:47.660 "base_bdev": "aio_bdev", 00:13:47.660 "thin_provision": false, 00:13:47.660 "num_allocated_clusters": 38, 00:13:47.660 "snapshot": false, 00:13:47.660 "clone": false, 00:13:47.660 "esnap_clone": false 00:13:47.660 } 00:13:47.660 } 00:13:47.660 } 00:13:47.660 ] 00:13:47.660 14:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:13:47.660 14:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 451ead4b-d633-4e0c-9c73-f4492ebfd331 00:13:47.660 14:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:47.920 14:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:47.920 14:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 451ead4b-d633-4e0c-9c73-f4492ebfd331 00:13:47.920 14:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:48.179 14:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:48.179 14:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:48.179 [2024-11-04 14:39:57.313711] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:48.767 14:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 451ead4b-d633-4e0c-9c73-f4492ebfd331 00:13:48.767 14:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:13:48.767 14:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 451ead4b-d633-4e0c-9c73-f4492ebfd331 00:13:48.767 14:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:48.767 14:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:48.767 14:39:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:48.767 14:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:48.767 14:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:48.767 14:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:48.767 14:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:48.767 14:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:48.767 14:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 451ead4b-d633-4e0c-9c73-f4492ebfd331 00:13:48.767 request: 00:13:48.767 { 00:13:48.767 "uuid": "451ead4b-d633-4e0c-9c73-f4492ebfd331", 00:13:48.767 "method": "bdev_lvol_get_lvstores", 00:13:48.767 "req_id": 1 00:13:48.767 } 00:13:48.767 Got JSON-RPC error response 00:13:48.767 response: 00:13:48.767 { 00:13:48.767 "code": -19, 00:13:48.767 "message": "No such device" 00:13:48.767 } 00:13:48.767 14:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:13:48.767 14:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:48.767 14:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:48.767 14:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:48.767 14:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:49.026 aio_bdev 00:13:49.026 14:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d2fce6c2-38bb-439e-9030-ee394254999c 00:13:49.026 14:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=d2fce6c2-38bb-439e-9030-ee394254999c 00:13:49.026 14:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:49.026 14:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:13:49.026 14:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:49.026 14:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:49.026 14:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:49.287 14:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d2fce6c2-38bb-439e-9030-ee394254999c -t 2000 00:13:49.287 [ 00:13:49.287 { 
00:13:49.287 "name": "d2fce6c2-38bb-439e-9030-ee394254999c", 00:13:49.287 "aliases": [ 00:13:49.287 "lvs/lvol" 00:13:49.287 ], 00:13:49.287 "product_name": "Logical Volume", 00:13:49.287 "block_size": 4096, 00:13:49.287 "num_blocks": 38912, 00:13:49.287 "uuid": "d2fce6c2-38bb-439e-9030-ee394254999c", 00:13:49.287 "assigned_rate_limits": { 00:13:49.287 "rw_ios_per_sec": 0, 00:13:49.287 "rw_mbytes_per_sec": 0, 00:13:49.287 "r_mbytes_per_sec": 0, 00:13:49.287 "w_mbytes_per_sec": 0 00:13:49.287 }, 00:13:49.287 "claimed": false, 00:13:49.287 "zoned": false, 00:13:49.287 "supported_io_types": { 00:13:49.287 "read": true, 00:13:49.287 "write": true, 00:13:49.287 "unmap": true, 00:13:49.287 "flush": false, 00:13:49.287 "reset": true, 00:13:49.287 "nvme_admin": false, 00:13:49.287 "nvme_io": false, 00:13:49.287 "nvme_io_md": false, 00:13:49.287 "write_zeroes": true, 00:13:49.287 "zcopy": false, 00:13:49.287 "get_zone_info": false, 00:13:49.287 "zone_management": false, 00:13:49.287 "zone_append": false, 00:13:49.287 "compare": false, 00:13:49.287 "compare_and_write": false, 00:13:49.287 "abort": false, 00:13:49.287 "seek_hole": true, 00:13:49.287 "seek_data": true, 00:13:49.287 "copy": false, 00:13:49.287 "nvme_iov_md": false 00:13:49.287 }, 00:13:49.287 "driver_specific": { 00:13:49.287 "lvol": { 00:13:49.287 "lvol_store_uuid": "451ead4b-d633-4e0c-9c73-f4492ebfd331", 00:13:49.287 "base_bdev": "aio_bdev", 00:13:49.287 "thin_provision": false, 00:13:49.287 "num_allocated_clusters": 38, 00:13:49.287 "snapshot": false, 00:13:49.287 "clone": false, 00:13:49.287 "esnap_clone": false 00:13:49.287 } 00:13:49.287 } 00:13:49.287 } 00:13:49.287 ] 00:13:49.287 14:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:13:49.287 14:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 451ead4b-d633-4e0c-9c73-f4492ebfd331 00:13:49.287 14:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:49.545 14:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:49.545 14:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 451ead4b-d633-4e0c-9c73-f4492ebfd331 00:13:49.545 14:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:49.805 14:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:49.805 14:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d2fce6c2-38bb-439e-9030-ee394254999c 00:13:50.077 14:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 451ead4b-d633-4e0c-9c73-f4492ebfd331 00:13:50.077 14:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:50.335 14:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:50.593 00:13:50.593 real 0m19.006s 00:13:50.593 user 0m40.733s 00:13:50.593 sys 0m5.585s 00:13:50.593 14:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:50.593 ************************************ 00:13:50.593 END TEST lvs_grow_dirty 00:13:50.593 ************************************ 00:13:50.593 14:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:50.854 14:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:50.854 14:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:13:50.854 14:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:13:50.854 14:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:13:50.854 14:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:50.854 14:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:13:50.854 14:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:13:50.854 14:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:13:50.854 14:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:50.854 nvmf_trace.0 00:13:50.854 14:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:13:50.854 14:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:50.854 14:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:50.854 14:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:13:51.794 14:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:51.794 14:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:13:51.794 14:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:51.794 14:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:51.794 rmmod nvme_tcp 00:13:51.794 rmmod nvme_fabrics 00:13:51.794 rmmod nvme_keyring 00:13:51.794 14:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:51.794 14:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:13:51.794 14:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:13:51.794 14:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 62724 ']' 00:13:51.794 14:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 62724 00:13:51.794 14:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 62724 ']' 00:13:51.794 14:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 62724 00:13:51.794 14:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:13:51.794 14:40:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:51.794 14:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62724 00:13:52.052 14:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:52.052 14:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:52.052 14:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62724' 00:13:52.052 killing process with pid 62724 00:13:52.052 14:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 62724 00:13:52.052 14:40:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 62724 00:13:52.052 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:52.052 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:52.052 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:52.052 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:13:52.052 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:13:52.052 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:52.052 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:13:52.052 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:52.052 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:52.052 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:52.052 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:52.052 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:52.052 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:52.052 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:52.052 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:52.052 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:52.052 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:52.052 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:52.310 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:52.310 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:52.310 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:52.310 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:52.310 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:13:52.310 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.310 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.310 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.310 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:13:52.310 00:13:52.310 real 0m39.457s 00:13:52.310 user 1m3.497s 00:13:52.310 sys 0m8.815s 00:13:52.310 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:52.310 ************************************ 00:13:52.310 END TEST nvmf_lvs_grow 00:13:52.310 ************************************ 00:13:52.310 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:52.310 14:40:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:52.310 14:40:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:52.310 14:40:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:52.310 14:40:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:52.310 ************************************ 00:13:52.310 START TEST nvmf_bdev_io_wait 00:13:52.310 ************************************ 00:13:52.310 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:52.310 * Looking for test storage... 
00:13:52.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:52.310 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:52.310 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:13:52.310 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:52.569 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:52.569 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:52.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.570 --rc genhtml_branch_coverage=1 00:13:52.570 --rc genhtml_function_coverage=1 00:13:52.570 --rc genhtml_legend=1 00:13:52.570 --rc geninfo_all_blocks=1 00:13:52.570 --rc geninfo_unexecuted_blocks=1 00:13:52.570 00:13:52.570 ' 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:52.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.570 --rc genhtml_branch_coverage=1 00:13:52.570 --rc genhtml_function_coverage=1 00:13:52.570 --rc genhtml_legend=1 00:13:52.570 --rc geninfo_all_blocks=1 00:13:52.570 --rc geninfo_unexecuted_blocks=1 00:13:52.570 00:13:52.570 ' 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:52.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.570 --rc genhtml_branch_coverage=1 00:13:52.570 --rc genhtml_function_coverage=1 00:13:52.570 --rc genhtml_legend=1 00:13:52.570 --rc geninfo_all_blocks=1 00:13:52.570 --rc geninfo_unexecuted_blocks=1 00:13:52.570 00:13:52.570 ' 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:52.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.570 --rc genhtml_branch_coverage=1 00:13:52.570 --rc genhtml_function_coverage=1 00:13:52.570 --rc genhtml_legend=1 00:13:52.570 --rc geninfo_all_blocks=1 00:13:52.570 --rc geninfo_unexecuted_blocks=1 00:13:52.570 00:13:52.570 ' 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:52.570 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:52.571 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:52.571 
14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:52.571 Cannot find device "nvmf_init_br" 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:52.571 Cannot find device "nvmf_init_br2" 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:52.571 Cannot find device "nvmf_tgt_br" 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:52.571 Cannot find device "nvmf_tgt_br2" 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:52.571 Cannot find device "nvmf_init_br" 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:52.571 Cannot find device "nvmf_init_br2" 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:52.571 Cannot find device "nvmf_tgt_br" 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:52.571 Cannot find device "nvmf_tgt_br2" 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:52.571 Cannot find device "nvmf_br" 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:52.571 Cannot find device "nvmf_init_if" 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:52.571 Cannot find device "nvmf_init_if2" 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:52.571 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:13:52.571 
14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:52.571 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:52.571 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:52.829 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:52.829 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:52.829 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:52.829 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:52.829 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:52.829 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:52.829 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:52.829 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:52.829 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:52.829 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:52.829 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:52.829 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:13:52.829 00:13:52.829 --- 10.0.0.3 ping statistics --- 00:13:52.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.829 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:13:52.829 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:52.829 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:52.829 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:13:52.829 00:13:52.829 --- 10.0.0.4 ping statistics --- 00:13:52.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.829 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:13:52.829 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:52.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:52.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:13:52.829 00:13:52.829 --- 10.0.0.1 ping statistics --- 00:13:52.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.829 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:13:52.829 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:52.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:52.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:13:52.829 00:13:52.829 --- 10.0.0.2 ping statistics --- 00:13:52.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.829 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:13:52.829 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:52.829 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:13:52.829 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:52.829 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:52.830 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:52.830 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:52.830 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:52.830 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:52.830 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:52.830 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:52.830 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:52.830 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:52.830 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:52.830 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=63097 00:13:52.830 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:52.830 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 63097 00:13:52.830 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 63097 ']' 00:13:52.830 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.830 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:52.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.830 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.830 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:52.830 14:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:52.830 [2024-11-04 14:40:01.820126] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
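The nvmf_veth_init sequence traced above reduces to a small, fixed topology: two initiator veth pairs on the host, two target veth pairs whose far ends live in the nvmf_tgt_ns_spdk namespace, and one bridge joining the host-side peers, with NVMe/TCP port 4420 opened in iptables and a ping sanity check in each direction. A condensed sketch reconstructed from the commands shown in the trace (names and addresses are exactly those above; the surrounding common.sh error handling and cleanup are omitted, and the loops are a simplification of the one-command-per-device trace):

    # namespace plus the four veth pairs used by the test
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # target ends move into the namespace; addresses match the ping targets above
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # bring links up and enslave the host-side peers to one bridge
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # open the NVMe/TCP port, allow bridge forwarding, and verify reachability
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2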
00:13:52.830 [2024-11-04 14:40:01.820198] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.830 [2024-11-04 14:40:01.957428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:53.087 [2024-11-04 14:40:02.001591] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:53.087 [2024-11-04 14:40:02.001650] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:53.087 [2024-11-04 14:40:02.001656] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:53.087 [2024-11-04 14:40:02.001660] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:53.087 [2024-11-04 14:40:02.001664] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:53.087 [2024-11-04 14:40:02.002421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.087 [2024-11-04 14:40:02.002737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:53.087 [2024-11-04 14:40:02.003471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:53.087 [2024-11-04 14:40:02.003571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.682 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:53.682 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:13:53.682 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:53.682 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:53.682 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:53.682 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.682 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:53.682 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.682 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:53.682 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.682 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:53.682 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.682 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:53.682 [2024-11-04 14:40:02.802114] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:53.682 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.682 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:53.682 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.682 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:53.682 [2024-11-04 14:40:02.813275] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.682 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.682 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:53.682 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:53.941 Malloc0 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:53.941 [2024-11-04 14:40:02.860039] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=63132 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=63134 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=63136 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:13:53.941 14:40:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:53.941 { 00:13:53.941 "params": { 00:13:53.941 "name": "Nvme$subsystem", 00:13:53.941 "trtype": "$TEST_TRANSPORT", 00:13:53.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:53.941 "adrfam": "ipv4", 00:13:53.941 "trsvcid": "$NVMF_PORT", 00:13:53.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:53.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:53.941 "hdgst": ${hdgst:-false}, 00:13:53.941 "ddgst": ${ddgst:-false} 00:13:53.941 }, 00:13:53.941 "method": "bdev_nvme_attach_controller" 00:13:53.941 } 00:13:53.941 EOF 00:13:53.941 )") 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:53.941 { 00:13:53.941 "params": { 00:13:53.941 "name": "Nvme$subsystem", 00:13:53.941 "trtype": "$TEST_TRANSPORT", 00:13:53.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:53.941 "adrfam": "ipv4", 00:13:53.941 "trsvcid": "$NVMF_PORT", 00:13:53.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:53.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:53.941 "hdgst": ${hdgst:-false}, 00:13:53.941 "ddgst": ${ddgst:-false} 00:13:53.941 }, 00:13:53.941 "method": "bdev_nvme_attach_controller" 00:13:53.941 } 00:13:53.941 EOF 00:13:53.941 )") 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=63138 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:13:53.941 { 00:13:53.941 "params": { 00:13:53.941 "name": "Nvme$subsystem", 00:13:53.941 "trtype": "$TEST_TRANSPORT", 00:13:53.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:53.941 "adrfam": "ipv4", 00:13:53.941 "trsvcid": "$NVMF_PORT", 00:13:53.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:53.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:53.941 "hdgst": ${hdgst:-false}, 00:13:53.941 "ddgst": ${ddgst:-false} 00:13:53.941 }, 00:13:53.941 "method": "bdev_nvme_attach_controller" 00:13:53.941 } 00:13:53.941 EOF 00:13:53.941 )") 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:53.941 "params": { 00:13:53.941 "name": "Nvme1", 00:13:53.941 "trtype": "tcp", 00:13:53.941 "traddr": "10.0.0.3", 00:13:53.941 "adrfam": "ipv4", 00:13:53.941 "trsvcid": "4420", 00:13:53.941 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:53.941 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:53.941 "hdgst": false, 00:13:53.941 "ddgst": false 00:13:53.941 }, 00:13:53.941 "method": "bdev_nvme_attach_controller" 00:13:53.941 }' 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:53.941 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:53.941 { 00:13:53.941 "params": { 00:13:53.941 "name": "Nvme$subsystem", 00:13:53.941 "trtype": "$TEST_TRANSPORT", 00:13:53.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:53.942 "adrfam": "ipv4", 00:13:53.942 "trsvcid": "$NVMF_PORT", 00:13:53.942 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:53.942 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:53.942 "hdgst": ${hdgst:-false}, 00:13:53.942 "ddgst": ${ddgst:-false} 00:13:53.942 }, 00:13:53.942 "method": "bdev_nvme_attach_controller" 00:13:53.942 } 00:13:53.942 EOF 00:13:53.942 )") 00:13:53.942 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:13:53.942 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:53.942 "params": { 00:13:53.942 "name": "Nvme1", 00:13:53.942 "trtype": "tcp", 00:13:53.942 "traddr": "10.0.0.3", 00:13:53.942 "adrfam": "ipv4", 00:13:53.942 "trsvcid": "4420", 00:13:53.942 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:53.942 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:53.942 "hdgst": false, 00:13:53.942 "ddgst": false 00:13:53.942 }, 00:13:53.942 "method": "bdev_nvme_attach_controller" 00:13:53.942 }' 00:13:53.942 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:13:53.942 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:13:53.942 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:13:53.942 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:13:53.942 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:53.942 "params": { 00:13:53.942 "name": "Nvme1", 00:13:53.942 "trtype": "tcp", 00:13:53.942 "traddr": "10.0.0.3", 00:13:53.942 "adrfam": "ipv4", 00:13:53.942 "trsvcid": "4420", 00:13:53.942 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:53.942 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:53.942 "hdgst": false, 00:13:53.942 "ddgst": false 00:13:53.942 }, 00:13:53.942 "method": "bdev_nvme_attach_controller" 00:13:53.942 }' 00:13:53.942 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:13:53.942 14:40:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:53.942 "params": { 00:13:53.942 "name": "Nvme1", 00:13:53.942 "trtype": "tcp", 00:13:53.942 "traddr": "10.0.0.3", 00:13:53.942 "adrfam": "ipv4", 00:13:53.942 "trsvcid": "4420", 00:13:53.942 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:53.942 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:53.942 "hdgst": false, 00:13:53.942 "ddgst": false 00:13:53.942 }, 00:13:53.942 "method": "bdev_nvme_attach_controller" 00:13:53.942 }' 00:13:53.942 [2024-11-04 14:40:02.905233] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:13:53.942 [2024-11-04 14:40:02.905820] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:53.942 [2024-11-04 14:40:02.924066] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:13:53.942 [2024-11-04 14:40:02.924136] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:53.942 [2024-11-04 14:40:02.925337] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:13:53.942 [2024-11-04 14:40:02.925417] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:53.942 [2024-11-04 14:40:02.926406] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:13:53.942 [2024-11-04 14:40:02.926463] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:54.201 [2024-11-04 14:40:03.092235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.201 14:40:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 63132 00:13:54.201 [2024-11-04 14:40:03.130408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:13:54.201 [2024-11-04 14:40:03.141905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.201 [2024-11-04 14:40:03.143181] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:54.201 [2024-11-04 14:40:03.181224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:54.202 [2024-11-04 14:40:03.194424] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:54.202 [2024-11-04 14:40:03.205198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.202 [2024-11-04 14:40:03.243105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:54.202 [2024-11-04 14:40:03.256490] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:54.202 [2024-11-04 14:40:03.262306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.202 Running I/O for 1 seconds... 00:13:54.202 [2024-11-04 14:40:03.294395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:54.202 [2024-11-04 14:40:03.307089] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:54.202 Running I/O for 1 seconds... 00:13:54.459 Running I/O for 1 seconds... 00:13:54.459 Running I/O for 1 seconds... 
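Note: the four bdevperf invocations above are the core of the bdev_io_wait test. Each workload (write, read, flush, unmap) gets its own bdevperf process pinned to a distinct core mask and shared-memory instance id, all pointed at the same Nvme1n1 namespace, and the script records each pid so it can wait on every run before tearing the target down. A condensed sketch of that launch/wait loop follows; SPDK_DIR is a placeholder and gen_nvmf_target_json is the sourced helper traced above, so this is an outline of the visible pattern rather than the script verbatim.

#!/usr/bin/env bash
# Launch one bdevperf per workload in parallel, each on its own core mask and
# instance id, then wait for all of them before cleanup. The -m/-i/-w pairs
# mirror the trace: write/0x10/1, read/0x20/2, flush/0x40/3, unmap/0x80/4.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
declare -A pids
i=1
for workload in write read flush unmap; do
    mask=$(printf '0x%x' $((0x10 << (i - 1))))
    "$SPDK_DIR/build/examples/bdevperf" -m "$mask" -i "$i" \
        --json <(gen_nvmf_target_json) -q 128 -o 4096 -w "$workload" -t 1 -s 256 &
    pids[$workload]=$!
    i=$((i + 1))
done
for workload in "${!pids[@]}"; do
    wait "${pids[$workload]}"
done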
00:13:55.396 8608.00 IOPS, 33.62 MiB/s 00:13:55.396 Latency(us) 00:13:55.396 [2024-11-04T14:40:04.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:55.396 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:13:55.396 Nvme1n1 : 1.02 8618.72 33.67 0.00 0.00 14737.30 6150.30 27424.30 00:13:55.396 [2024-11-04T14:40:04.536Z] =================================================================================================================== 00:13:55.396 [2024-11-04T14:40:04.536Z] Total : 8618.72 33.67 0.00 0.00 14737.30 6150.30 27424.30 00:13:55.396 10501.00 IOPS, 41.02 MiB/s 00:13:55.396 Latency(us) 00:13:55.396 [2024-11-04T14:40:04.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:55.396 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:13:55.396 Nvme1n1 : 1.01 10552.72 41.22 0.00 0.00 12079.00 7108.14 24197.91 00:13:55.396 [2024-11-04T14:40:04.536Z] =================================================================================================================== 00:13:55.396 [2024-11-04T14:40:04.536Z] Total : 10552.72 41.22 0.00 0.00 12079.00 7108.14 24197.91 00:13:55.396 160624.00 IOPS, 627.44 MiB/s 00:13:55.396 Latency(us) 00:13:55.396 [2024-11-04T14:40:04.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:55.396 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:13:55.396 Nvme1n1 : 1.00 160312.82 626.22 0.00 0.00 794.33 351.31 1915.67 00:13:55.396 [2024-11-04T14:40:04.536Z] =================================================================================================================== 00:13:55.396 [2024-11-04T14:40:04.536Z] Total : 160312.82 626.22 0.00 0.00 794.33 351.31 1915.67 00:13:55.396 8274.00 IOPS, 32.32 MiB/s 00:13:55.396 Latency(us) 00:13:55.396 [2024-11-04T14:40:04.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:55.397 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:13:55.397 Nvme1n1 : 1.01 8363.61 32.67 0.00 0.00 15265.27 3629.69 32263.88 00:13:55.397 [2024-11-04T14:40:04.537Z] =================================================================================================================== 00:13:55.397 [2024-11-04T14:40:04.537Z] Total : 8363.61 32.67 0.00 0.00 15265.27 3629.69 32263.88 00:13:55.397 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 63134 00:13:55.397 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 63136 00:13:55.397 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 63138 00:13:55.397 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:55.397 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.397 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:55.657 rmmod nvme_tcp 00:13:55.657 rmmod nvme_fabrics 00:13:55.657 rmmod nvme_keyring 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 63097 ']' 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 63097 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 63097 ']' 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 63097 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63097 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:55.657 killing process with pid 63097 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63097' 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 63097 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 63097 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:55.657 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:55.940 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:55.940 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:55.940 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:55.940 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:55.940 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:55.940 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:55.940 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:55.940 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:55.940 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:55.940 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:55.940 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:55.940 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:55.940 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.940 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:55.940 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.940 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:13:55.940 00:13:55.940 real 0m3.652s 00:13:55.940 user 0m15.654s 00:13:55.940 sys 0m1.677s 00:13:55.940 ************************************ 00:13:55.940 END TEST nvmf_bdev_io_wait 00:13:55.940 ************************************ 00:13:55.940 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:55.940 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:55.940 14:40:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:55.940 14:40:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:55.940 14:40:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:55.940 14:40:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:55.940 ************************************ 00:13:55.940 START TEST nvmf_queue_depth 00:13:55.940 ************************************ 00:13:55.940 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:56.202 * Looking for test storage... 
00:13:56.202 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:56.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.202 --rc genhtml_branch_coverage=1 00:13:56.202 --rc genhtml_function_coverage=1 00:13:56.202 --rc genhtml_legend=1 00:13:56.202 --rc geninfo_all_blocks=1 00:13:56.202 --rc geninfo_unexecuted_blocks=1 00:13:56.202 00:13:56.202 ' 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:56.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.202 --rc genhtml_branch_coverage=1 00:13:56.202 --rc genhtml_function_coverage=1 00:13:56.202 --rc genhtml_legend=1 00:13:56.202 --rc geninfo_all_blocks=1 00:13:56.202 --rc geninfo_unexecuted_blocks=1 00:13:56.202 00:13:56.202 ' 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:56.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.202 --rc genhtml_branch_coverage=1 00:13:56.202 --rc genhtml_function_coverage=1 00:13:56.202 --rc genhtml_legend=1 00:13:56.202 --rc geninfo_all_blocks=1 00:13:56.202 --rc geninfo_unexecuted_blocks=1 00:13:56.202 00:13:56.202 ' 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:56.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.202 --rc genhtml_branch_coverage=1 00:13:56.202 --rc genhtml_function_coverage=1 00:13:56.202 --rc genhtml_legend=1 00:13:56.202 --rc geninfo_all_blocks=1 00:13:56.202 --rc geninfo_unexecuted_blocks=1 00:13:56.202 00:13:56.202 ' 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.202 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:56.203 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:13:56.203 
14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:56.203 14:40:05 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:56.203 Cannot find device "nvmf_init_br" 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:56.203 Cannot find device "nvmf_init_br2" 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:56.203 Cannot find device "nvmf_tgt_br" 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:56.203 Cannot find device "nvmf_tgt_br2" 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:56.203 Cannot find device "nvmf_init_br" 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:56.203 Cannot find device "nvmf_init_br2" 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:56.203 Cannot find device "nvmf_tgt_br" 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:56.203 Cannot find device "nvmf_tgt_br2" 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:56.203 Cannot find device "nvmf_br" 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:56.203 Cannot find device "nvmf_init_if" 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:13:56.203 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:56.203 Cannot find device "nvmf_init_if2" 00:13:56.204 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:13:56.204 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:56.204 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:56.204 14:40:05 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:13:56.204 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:56.204 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:56.204 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:13:56.204 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:56.204 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:56.204 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:56.204 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:56.467 
14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:56.467 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:56.467 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:13:56.467 00:13:56.467 --- 10.0.0.3 ping statistics --- 00:13:56.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.467 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:56.467 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:56.467 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:13:56.467 00:13:56.467 --- 10.0.0.4 ping statistics --- 00:13:56.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.467 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:56.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:56.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:13:56.467 00:13:56.467 --- 10.0.0.1 ping statistics --- 00:13:56.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.467 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:56.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:56.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms 00:13:56.467 00:13:56.467 --- 10.0.0.2 ping statistics --- 00:13:56.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.467 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:56.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=63394 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 63394 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 63394 ']' 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:56.467 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:56.467 [2024-11-04 14:40:05.586061] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:13:56.467 [2024-11-04 14:40:05.586129] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.728 [2024-11-04 14:40:05.727576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.728 [2024-11-04 14:40:05.772429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.728 [2024-11-04 14:40:05.772481] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.728 [2024-11-04 14:40:05.772487] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.728 [2024-11-04 14:40:05.772492] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.728 [2024-11-04 14:40:05.772497] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:56.728 [2024-11-04 14:40:05.772852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.728 [2024-11-04 14:40:05.816487] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:57.667 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:57.667 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:13:57.667 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:57.667 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:57.667 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:57.667 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.667 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:57.667 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.667 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:57.667 [2024-11-04 14:40:06.511460] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:57.667 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.667 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:57.667 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.667 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:57.667 Malloc0 00:13:57.667 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.667 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:57.667 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.667 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:13:57.667 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.667 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:57.667 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.667 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:57.667 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.667 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:57.667 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.667 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:57.667 [2024-11-04 14:40:06.555267] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:57.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:57.668 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.668 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=63426 00:13:57.668 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:57.668 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 63426 /var/tmp/bdevperf.sock 00:13:57.668 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 63426 ']' 00:13:57.668 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:57.668 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:57.668 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:57.668 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:57.668 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:57.668 14:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:57.668 [2024-11-04 14:40:06.596552] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:13:57.668 [2024-11-04 14:40:06.596656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63426 ] 00:13:57.668 [2024-11-04 14:40:06.736838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.668 [2024-11-04 14:40:06.781161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.927 [2024-11-04 14:40:06.823716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:58.496 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:58.496 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:13:58.496 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:58.496 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.496 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:58.496 NVMe0n1 00:13:58.496 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.496 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:58.755 Running I/O for 10 seconds... 00:14:00.628 6961.00 IOPS, 27.19 MiB/s [2024-11-04T14:40:10.722Z] 7534.00 IOPS, 29.43 MiB/s [2024-11-04T14:40:11.666Z] 7793.33 IOPS, 30.44 MiB/s [2024-11-04T14:40:13.066Z] 7902.75 IOPS, 30.87 MiB/s [2024-11-04T14:40:14.001Z] 8004.60 IOPS, 31.27 MiB/s [2024-11-04T14:40:14.940Z] 8055.17 IOPS, 31.47 MiB/s [2024-11-04T14:40:15.879Z] 8078.00 IOPS, 31.55 MiB/s [2024-11-04T14:40:16.818Z] 8143.25 IOPS, 31.81 MiB/s [2024-11-04T14:40:17.761Z] 8214.00 IOPS, 32.09 MiB/s [2024-11-04T14:40:17.761Z] 8245.20 IOPS, 32.21 MiB/s 00:14:08.621 Latency(us) 00:14:08.621 [2024-11-04T14:40:17.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.621 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:08.621 Verification LBA range: start 0x0 length 0x4000 00:14:08.621 NVMe0n1 : 10.07 8286.00 32.37 0.00 0.00 123000.67 15426.17 100421.32 00:14:08.621 [2024-11-04T14:40:17.761Z] =================================================================================================================== 00:14:08.621 [2024-11-04T14:40:17.761Z] Total : 8286.00 32.37 0.00 0.00 123000.67 15426.17 100421.32 00:14:08.621 { 00:14:08.621 "results": [ 00:14:08.621 { 00:14:08.621 "job": "NVMe0n1", 00:14:08.621 "core_mask": "0x1", 00:14:08.621 "workload": "verify", 00:14:08.621 "status": "finished", 00:14:08.621 "verify_range": { 00:14:08.621 "start": 0, 00:14:08.621 "length": 16384 00:14:08.621 }, 00:14:08.621 "queue_depth": 1024, 00:14:08.621 "io_size": 4096, 00:14:08.621 "runtime": 10.074344, 00:14:08.621 "iops": 8285.998572214727, 00:14:08.621 "mibps": 32.367181922713776, 00:14:08.621 "io_failed": 0, 00:14:08.621 "io_timeout": 0, 00:14:08.621 "avg_latency_us": 123000.66559910358, 00:14:08.621 "min_latency_us": 15426.166153846154, 00:14:08.621 "max_latency_us": 100421.31692307693 
00:14:08.621 } 00:14:08.621 ], 00:14:08.621 "core_count": 1 00:14:08.621 } 00:14:08.621 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 63426 00:14:08.621 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 63426 ']' 00:14:08.621 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 63426 00:14:08.621 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:14:08.621 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:08.621 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63426 00:14:08.883 killing process with pid 63426 00:14:08.883 Received shutdown signal, test time was about 10.000000 seconds 00:14:08.883 00:14:08.883 Latency(us) 00:14:08.883 [2024-11-04T14:40:18.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.883 [2024-11-04T14:40:18.023Z] =================================================================================================================== 00:14:08.883 [2024-11-04T14:40:18.023Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:08.883 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:08.883 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:08.883 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63426' 00:14:08.883 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 63426 00:14:08.883 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 63426 00:14:08.883 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:08.883 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:08.883 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:08.883 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:14:09.144 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:09.144 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:14:09.144 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:09.144 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:09.144 rmmod nvme_tcp 00:14:09.144 rmmod nvme_fabrics 00:14:09.144 rmmod nvme_keyring 00:14:09.144 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:09.144 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:14:09.144 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:14:09.144 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 63394 ']' 00:14:09.144 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 63394 00:14:09.144 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 63394 ']' 
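Note: the queue-depth run recorded above follows a small, reusable shape: bdevperf is started idle with -z on a private RPC socket, the NVMe-oF controller is attached over that socket once the target is listening, and bdevperf.py perform_tests triggers the timed verify workload whose JSON summary appears in the log. Roughly, stripped of the harness's tracing and error traps (waitforlisten and rpc_cmd are helpers sourced from the test scripts, not standalone commands, and the final kill/wait stands in for the script's killprocess helper):

#!/usr/bin/env bash
# Condensed outline of the queue-depth flow: idle bdevperf + RPC attach + timed run.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
RPC_SOCK=/var/tmp/bdevperf.sock

"$SPDK_DIR/build/examples/bdevperf" -z -r "$RPC_SOCK" -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
waitforlisten "$bdevperf_pid" "$RPC_SOCK"   # block until the bdevperf RPC socket is up

# Attach the NVMe-oF TCP controller exposed by the target started earlier in the test.
rpc_cmd -s "$RPC_SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# Run the configured workload (queue depth 1024) and print the JSON summary seen above.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$RPC_SOCK" perform_tests

kill "$bdevperf_pid"
wait "$bdevperf_pid" || true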
00:14:09.144 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 63394 00:14:09.144 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:14:09.144 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:09.144 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63394 00:14:09.144 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:09.144 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:09.144 killing process with pid 63394 00:14:09.144 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63394' 00:14:09.144 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 63394 00:14:09.144 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 63394 00:14:09.406 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:09.406 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:09.406 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:09.406 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:14:09.406 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:14:09.406 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:09.406 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:14:09.406 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:09.406 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:09.406 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:09.406 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:09.406 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:09.406 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:09.406 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:09.406 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:09.406 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:09.406 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:09.406 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:09.406 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:09.406 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:09.406 14:40:18 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:09.406 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:09.406 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:09.406 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.406 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:09.406 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.669 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:14:09.669 00:14:09.669 real 0m13.537s 00:14:09.669 user 0m23.273s 00:14:09.669 sys 0m1.880s 00:14:09.669 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:09.669 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:09.669 ************************************ 00:14:09.670 END TEST nvmf_queue_depth 00:14:09.670 ************************************ 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:09.670 ************************************ 00:14:09.670 START TEST nvmf_target_multipath 00:14:09.670 ************************************ 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:09.670 * Looking for test storage... 
00:14:09.670 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:09.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.670 --rc genhtml_branch_coverage=1 00:14:09.670 --rc genhtml_function_coverage=1 00:14:09.670 --rc genhtml_legend=1 00:14:09.670 --rc geninfo_all_blocks=1 00:14:09.670 --rc geninfo_unexecuted_blocks=1 00:14:09.670 00:14:09.670 ' 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:09.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.670 --rc genhtml_branch_coverage=1 00:14:09.670 --rc genhtml_function_coverage=1 00:14:09.670 --rc genhtml_legend=1 00:14:09.670 --rc geninfo_all_blocks=1 00:14:09.670 --rc geninfo_unexecuted_blocks=1 00:14:09.670 00:14:09.670 ' 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:09.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.670 --rc genhtml_branch_coverage=1 00:14:09.670 --rc genhtml_function_coverage=1 00:14:09.670 --rc genhtml_legend=1 00:14:09.670 --rc geninfo_all_blocks=1 00:14:09.670 --rc geninfo_unexecuted_blocks=1 00:14:09.670 00:14:09.670 ' 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:09.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.670 --rc genhtml_branch_coverage=1 00:14:09.670 --rc genhtml_function_coverage=1 00:14:09.670 --rc genhtml_legend=1 00:14:09.670 --rc geninfo_all_blocks=1 00:14:09.670 --rc geninfo_unexecuted_blocks=1 00:14:09.670 00:14:09.670 ' 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.670 
14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:14:09.670 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:09.671 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:09.671 14:40:18 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:09.671 Cannot find device "nvmf_init_br" 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:09.671 Cannot find device "nvmf_init_br2" 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:09.671 Cannot find device "nvmf_tgt_br" 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:14:09.671 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:09.933 Cannot find device "nvmf_tgt_br2" 00:14:09.933 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:14:09.933 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:09.933 Cannot find device "nvmf_init_br" 00:14:09.933 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:14:09.933 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:09.933 Cannot find device "nvmf_init_br2" 00:14:09.933 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:14:09.933 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:09.933 Cannot find device "nvmf_tgt_br" 00:14:09.933 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:14:09.933 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:09.933 Cannot find device "nvmf_tgt_br2" 00:14:09.933 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:14:09.933 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:09.933 Cannot find device "nvmf_br" 00:14:09.933 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:14:09.933 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:09.933 Cannot find device "nvmf_init_if" 00:14:09.933 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:14:09.933 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:09.933 Cannot find device "nvmf_init_if2" 00:14:09.934 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:14:09.934 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:09.934 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:09.934 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:14:09.934 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:09.934 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:09.934 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:14:09.934 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:09.934 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:09.934 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:09.934 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:09.934 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:09.934 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:09.934 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:09.934 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:09.934 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:09.934 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:09.934 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:09.934 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:09.934 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:09.934 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:09.934 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:09.934 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:09.934 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:09.934 14:40:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
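Editor's note on the interface setup above: nvmf_veth_init first tries to delete any leftover interfaces (hence the expected "Cannot find device" messages), then builds the virtual topology used for the rest of this run: two initiator veth ends on the host (10.0.0.1/10.0.0.2) and two target ends inside the nvmf_tgt_ns_spdk namespace (10.0.0.3/10.0.0.4), which the log goes on to join with the nvmf_br bridge, open up in iptables, and ping-verify. A condensed sketch of the whole recipe, showing one of the two initiator/target pairs (the second pair is analogous; all commands assume root):

#!/usr/bin/env bash
# Condensed sketch of the topology assembled in this log (addresses match the log).
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br   # host-side initiator end
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target end, moved into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the peer ends together and allow the NVMe/TCP port through the host firewall.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity check: the target-side address should answer from the host.
ping -c 1 10.0.0.3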
00:14:09.934 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:09.934 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:09.934 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:09.934 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:09.934 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:09.934 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:09.934 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:09.934 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:09.934 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:09.934 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:10.210 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:10.210 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:10.211 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:10.211 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:10.211 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:10.211 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:10.211 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:14:10.211 00:14:10.211 --- 10.0.0.3 ping statistics --- 00:14:10.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.211 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:14:10.211 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:10.211 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:10.211 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:14:10.211 00:14:10.211 --- 10.0.0.4 ping statistics --- 00:14:10.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.211 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:14:10.211 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:10.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:10.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:14:10.211 00:14:10.211 --- 10.0.0.1 ping statistics --- 00:14:10.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.211 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:14:10.211 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:10.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:10.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:14:10.211 00:14:10.211 --- 10.0.0.2 ping statistics --- 00:14:10.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.211 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:14:10.211 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.211 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:14:10.211 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:10.211 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.211 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:10.211 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:10.211 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.211 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:10.211 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:10.211 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:14:10.211 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:14:10.211 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:14:10.211 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:10.211 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:10.211 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:10.211 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=63804 00:14:10.211 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 63804 00:14:10.211 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@833 -- # '[' -z 63804 ']' 00:14:10.211 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.211 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:10.211 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
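Editor's note before nvmf_tgt comes up: with the fabric ping-verified and nvme-tcp loaded, the next steps in the log launch the target inside the namespace and provision a malloc-backed subsystem with listeners on both target addresses, which is what gives the initiator its two ANA paths. The sketch below condenses those upcoming steps; the commands mirror the ones printed further down, the waitforlisten helper is replaced by a plain sleep for brevity, and the flag readings in the comments (for example -r on nvmf_create_subsystem enabling ANA reporting, -g/-G on nvme connect enabling TCP header/data digests) are the editor's interpretation rather than something stated in the log.

#!/usr/bin/env bash
# Sketch of the target bring-up and multipath provisioning performed next in this run.
SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"

# Run the target inside the namespace created above (core mask 0xF, all tracepoint groups).
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
sleep 2   # harness uses waitforlisten on /var/tmp/spdk.sock instead of a fixed sleep

"$RPC" nvmf_create_transport -t tcp -o -u 8192          # TCP transport, options as passed by the harness
"$RPC" bdev_malloc_create 64 512 -b Malloc0             # 64 MiB malloc bdev, 512-byte blocks
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

# Two listeners on the same subsystem -> two paths to one namespace.
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420

# Host-side connect over both paths (the log additionally passes the generated
# --hostnqn/--hostid values, elided here).
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G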
00:14:10.211 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:10.211 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:10.211 14:40:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:10.211 [2024-11-04 14:40:19.164929] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:14:10.211 [2024-11-04 14:40:19.165001] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.211 [2024-11-04 14:40:19.306804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:10.211 [2024-11-04 14:40:19.343946] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.211 [2024-11-04 14:40:19.343984] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:10.211 [2024-11-04 14:40:19.343991] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.211 [2024-11-04 14:40:19.343996] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.211 [2024-11-04 14:40:19.344001] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:10.211 [2024-11-04 14:40:19.344849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.211 [2024-11-04 14:40:19.344934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.211 [2024-11-04 14:40:19.345285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:10.211 [2024-11-04 14:40:19.345311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.472 [2024-11-04 14:40:19.378074] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:11.044 14:40:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:11.044 14:40:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@866 -- # return 0 00:14:11.044 14:40:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:11.044 14:40:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:11.044 14:40:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:11.044 14:40:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.044 14:40:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:11.304 [2024-11-04 14:40:20.261325] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:11.304 14:40:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:11.573 Malloc0 00:14:11.573 14:40:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:14:11.835 14:40:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:11.835 14:40:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:12.096 [2024-11-04 14:40:21.128595] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:12.096 14:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:14:12.357 [2024-11-04 14:40:21.340755] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:14:12.357 14:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid=0c7d476c-d4d7-4594-a48a-578d93697ffa -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:14:12.357 14:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid=0c7d476c-d4d7-4594-a48a-578d93697ffa -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:14:12.618 14:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:14:12.618 14:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # local i=0 00:14:12.618 14:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:12.618 14:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:12.618 14:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # sleep 2 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # return 0 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=63888 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:14:14.572 14:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:14:14.572 [global] 00:14:14.572 thread=1 00:14:14.572 invalidate=1 00:14:14.572 rw=randrw 00:14:14.572 time_based=1 00:14:14.572 runtime=6 00:14:14.572 ioengine=libaio 00:14:14.572 direct=1 00:14:14.572 bs=4096 00:14:14.572 iodepth=128 00:14:14.572 norandommap=0 00:14:14.572 numjobs=1 00:14:14.572 00:14:14.572 verify_dump=1 00:14:14.572 verify_backlog=512 00:14:14.572 verify_state_save=0 00:14:14.572 do_verify=1 00:14:14.572 verify=crc32c-intel 00:14:14.572 [job0] 00:14:14.572 filename=/dev/nvme0n1 00:14:14.572 Could not set queue depth (nvme0n1) 00:14:14.833 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:14.833 fio-3.35 00:14:14.833 Starting 1 thread 00:14:15.775 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:14:15.775 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:14:16.037 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:14:16.037 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:14:16.037 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:16.037 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:16.037 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:16.037 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:16.037 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:14:16.037 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:14:16.037 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:16.037 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:16.037 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:14:16.037 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:16.037 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:16.297 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:14:16.559 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:14:16.559 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:14:16.559 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:16.559 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:16.559 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:16.560 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:16.560 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:14:16.560 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:14:16.560 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:16.560 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:16.560 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:14:16.560 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:16.560 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 63888 00:14:21.858 00:14:21.858 job0: (groupid=0, jobs=1): err= 0: pid=63909: Mon Nov 4 14:40:29 2024 00:14:21.858 read: IOPS=11.3k, BW=44.2MiB/s (46.4MB/s)(266MiB/6007msec) 00:14:21.858 slat (usec): min=3, max=13254, avg=54.15, stdev=230.50 00:14:21.858 clat (usec): min=1734, max=21950, avg=7729.43, stdev=1496.81 00:14:21.858 lat (usec): min=1740, max=21971, avg=7783.58, stdev=1500.70 00:14:21.858 clat percentiles (usec): 00:14:21.858 | 1.00th=[ 4080], 5.00th=[ 5800], 10.00th=[ 6521], 20.00th=[ 6980], 00:14:21.858 | 30.00th=[ 7242], 40.00th=[ 7373], 50.00th=[ 7504], 60.00th=[ 7701], 00:14:21.858 | 70.00th=[ 7898], 80.00th=[ 8225], 90.00th=[ 8979], 95.00th=[11207], 00:14:21.858 | 99.00th=[12387], 99.50th=[13566], 99.90th=[16909], 99.95th=[19006], 00:14:21.858 | 99.99th=[19268] 00:14:21.858 bw ( KiB/s): min=11656, max=29240, per=52.17%, avg=23622.00, stdev=5814.15, samples=12 00:14:21.858 iops : min= 2914, max= 7310, avg=5905.50, stdev=1453.54, samples=12 00:14:21.858 write: IOPS=6586, BW=25.7MiB/s (27.0MB/s)(139MiB/5384msec); 0 zone resets 00:14:21.858 slat (usec): min=5, max=2663, avg=59.24, stdev=166.69 00:14:21.858 clat (usec): min=1158, max=17425, avg=6668.44, stdev=1245.93 00:14:21.858 lat (usec): min=1176, max=17462, avg=6727.69, stdev=1250.36 00:14:21.858 clat percentiles (usec): 00:14:21.858 | 1.00th=[ 2999], 5.00th=[ 3851], 10.00th=[ 5211], 20.00th=[ 6194], 00:14:21.858 | 30.00th=[ 6456], 40.00th=[ 6652], 50.00th=[ 6849], 60.00th=[ 6980], 00:14:21.858 | 70.00th=[ 7177], 80.00th=[ 7308], 90.00th=[ 7635], 95.00th=[ 7963], 00:14:21.858 | 99.00th=[10552], 99.50th=[11076], 99.90th=[12256], 99.95th=[13173], 00:14:21.858 | 99.99th=[15664] 00:14:21.858 bw ( KiB/s): min=12160, max=28672, per=89.58%, avg=23601.33, stdev=5481.78, samples=12 00:14:21.858 iops : min= 3040, max= 7168, avg=5900.33, stdev=1370.44, samples=12 00:14:21.858 lat (msec) : 2=0.03%, 4=2.54%, 10=91.47%, 20=5.96%, 50=0.01% 00:14:21.858 cpu : usr=3.26%, sys=15.27%, ctx=6067, majf=0, minf=114 00:14:21.858 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:14:21.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:21.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:21.858 issued rwts: total=68003,35461,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:21.858 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:21.858 00:14:21.858 Run status group 0 (all jobs): 00:14:21.858 READ: bw=44.2MiB/s (46.4MB/s), 44.2MiB/s-44.2MiB/s (46.4MB/s-46.4MB/s), io=266MiB (279MB), run=6007-6007msec 00:14:21.858 WRITE: bw=25.7MiB/s (27.0MB/s), 25.7MiB/s-25.7MiB/s (27.0MB/s-27.0MB/s), io=139MiB (145MB), run=5384-5384msec 00:14:21.858 00:14:21.858 Disk stats (read/write): 00:14:21.858 nvme0n1: ios=67055/34796, merge=0/0, ticks=502914/220882, in_queue=723796, util=98.60% 00:14:21.858 14:40:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:14:21.858 14:40:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:14:21.858 14:40:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:14:21.858 14:40:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:14:21.858 14:40:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:21.858 14:40:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:21.858 14:40:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:21.858 14:40:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:21.858 14:40:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:14:21.858 14:40:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:14:21.858 14:40:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:21.858 14:40:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:21.858 14:40:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:21.858 14:40:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:21.858 14:40:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:14:21.858 14:40:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=63995 00:14:21.858 14:40:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:14:21.858 14:40:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:14:21.858 [global] 00:14:21.858 thread=1 00:14:21.858 invalidate=1 00:14:21.858 rw=randrw 00:14:21.858 time_based=1 00:14:21.858 runtime=6 00:14:21.858 ioengine=libaio 00:14:21.858 direct=1 00:14:21.858 bs=4096 00:14:21.858 iodepth=128 00:14:21.858 norandommap=0 00:14:21.858 numjobs=1 00:14:21.858 00:14:21.858 verify_dump=1 00:14:21.858 verify_backlog=512 00:14:21.858 verify_state_save=0 00:14:21.858 do_verify=1 00:14:21.858 verify=crc32c-intel 00:14:21.858 [job0] 00:14:21.858 filename=/dev/nvme0n1 00:14:21.858 Could not set queue depth (nvme0n1) 00:14:21.858 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:21.858 fio-3.35 00:14:21.858 Starting 1 thread 00:14:22.429 14:40:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:14:22.688 14:40:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:14:22.688 
14:40:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:14:22.688 14:40:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:14:22.688 14:40:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:22.688 14:40:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:22.688 14:40:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:22.688 14:40:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:22.688 14:40:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:14:22.688 14:40:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:14:22.688 14:40:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:22.688 14:40:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:22.688 14:40:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:22.688 14:40:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:22.688 14:40:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:22.948 14:40:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:14:23.210 14:40:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:14:23.210 14:40:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:14:23.210 14:40:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:23.210 14:40:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:23.210 14:40:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:14:23.210 14:40:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:23.210 14:40:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:14:23.210 14:40:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:14:23.210 14:40:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:23.210 14:40:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:23.210 14:40:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:23.210 14:40:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:23.210 14:40:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 63995 00:14:28.576 00:14:28.576 job0: (groupid=0, jobs=1): err= 0: pid=64016: Mon Nov 4 14:40:36 2024 00:14:28.576 read: IOPS=12.7k, BW=49.6MiB/s (52.0MB/s)(298MiB/6006msec) 00:14:28.576 slat (usec): min=2, max=8090, avg=40.92, stdev=212.57 00:14:28.576 clat (usec): min=289, max=18076, avg=6972.94, stdev=1859.10 00:14:28.576 lat (usec): min=301, max=18113, avg=7013.86, stdev=1871.45 00:14:28.576 clat percentiles (usec): 00:14:28.576 | 1.00th=[ 2057], 5.00th=[ 3589], 10.00th=[ 4490], 20.00th=[ 5669], 00:14:28.576 | 30.00th=[ 6456], 40.00th=[ 6915], 50.00th=[ 7242], 60.00th=[ 7373], 00:14:28.576 | 70.00th=[ 7701], 80.00th=[ 8029], 90.00th=[ 8586], 95.00th=[10159], 00:14:28.576 | 99.00th=[11863], 99.50th=[13042], 99.90th=[14746], 99.95th=[15795], 00:14:28.576 | 99.99th=[17171] 00:14:28.576 bw ( KiB/s): min=11560, max=39744, per=53.46%, avg=27149.09, stdev=8073.01, samples=11 00:14:28.576 iops : min= 2890, max= 9936, avg=6787.27, stdev=2018.25, samples=11 00:14:28.576 write: IOPS=7399, BW=28.9MiB/s (30.3MB/s)(150MiB/5186msec); 0 zone resets 00:14:28.576 slat (usec): min=8, max=3254, avg=48.95, stdev=147.94 00:14:28.576 clat (usec): min=753, max=17283, avg=6017.52, stdev=1694.25 00:14:28.576 lat (usec): min=776, max=17300, avg=6066.47, stdev=1707.93 00:14:28.576 clat percentiles (usec): 00:14:28.576 | 1.00th=[ 2008], 5.00th=[ 2835], 10.00th=[ 3458], 20.00th=[ 4424], 00:14:28.576 | 30.00th=[ 5604], 40.00th=[ 6194], 50.00th=[ 6456], 60.00th=[ 6718], 00:14:28.576 | 70.00th=[ 6915], 80.00th=[ 7177], 90.00th=[ 7439], 95.00th=[ 7832], 00:14:28.576 | 99.00th=[10552], 99.50th=[11731], 99.90th=[14484], 99.95th=[14877], 00:14:28.576 | 99.99th=[16581] 00:14:28.576 bw ( KiB/s): min=12216, max=40344, per=91.56%, avg=27099.64, stdev=7867.18, samples=11 00:14:28.576 iops : min= 3054, max=10086, avg=6774.91, stdev=1966.79, samples=11 00:14:28.576 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.06% 00:14:28.576 lat (msec) : 2=0.83%, 4=8.88%, 10=86.14%, 20=4.05% 00:14:28.576 cpu : usr=3.70%, sys=17.25%, ctx=6452, majf=0, minf=102 00:14:28.576 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:14:28.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:28.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:28.576 issued rwts: total=76246,38374,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:28.576 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:14:28.576 00:14:28.576 Run status group 0 (all jobs): 00:14:28.576 READ: bw=49.6MiB/s (52.0MB/s), 49.6MiB/s-49.6MiB/s (52.0MB/s-52.0MB/s), io=298MiB (312MB), run=6006-6006msec 00:14:28.576 WRITE: bw=28.9MiB/s (30.3MB/s), 28.9MiB/s-28.9MiB/s (30.3MB/s-30.3MB/s), io=150MiB (157MB), run=5186-5186msec 00:14:28.576 00:14:28.576 Disk stats (read/write): 00:14:28.576 nvme0n1: ios=75149/37872, merge=0/0, ticks=503726/215385, in_queue=719111, util=98.55% 00:14:28.576 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:28.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:28.576 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:28.576 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1221 -- # local i=0 00:14:28.576 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:28.576 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:28.577 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:28.577 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:28.577 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1233 -- # return 0 00:14:28.577 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:28.577 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:14:28.577 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:14:28.577 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:14:28.577 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:14:28.577 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:28.577 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:14:28.577 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:28.577 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:14:28.577 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:28.577 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:28.577 rmmod nvme_tcp 00:14:28.577 rmmod nvme_fabrics 00:14:28.577 rmmod nvme_keyring 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 
63804 ']' 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 63804 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@952 -- # '[' -z 63804 ']' 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # kill -0 63804 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@957 -- # uname 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63804 00:14:28.577 killing process with pid 63804 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63804' 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@971 -- # kill 63804 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@976 -- # wait 63804 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:28.577 14:40:37 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:14:28.577 00:14:28.577 real 0m18.833s 00:14:28.577 user 1m10.630s 00:14:28.577 sys 0m7.312s 00:14:28.577 ************************************ 00:14:28.577 END TEST nvmf_target_multipath 00:14:28.577 ************************************ 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:28.577 ************************************ 00:14:28.577 START TEST nvmf_zcopy 00:14:28.577 ************************************ 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:28.577 * Looking for test storage... 
00:14:28.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:28.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.577 --rc genhtml_branch_coverage=1 00:14:28.577 --rc genhtml_function_coverage=1 00:14:28.577 --rc genhtml_legend=1 00:14:28.577 --rc geninfo_all_blocks=1 00:14:28.577 --rc geninfo_unexecuted_blocks=1 00:14:28.577 00:14:28.577 ' 00:14:28.577 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:28.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.577 --rc genhtml_branch_coverage=1 00:14:28.577 --rc genhtml_function_coverage=1 00:14:28.577 --rc genhtml_legend=1 00:14:28.577 --rc geninfo_all_blocks=1 00:14:28.578 --rc geninfo_unexecuted_blocks=1 00:14:28.578 00:14:28.578 ' 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:28.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.578 --rc genhtml_branch_coverage=1 00:14:28.578 --rc genhtml_function_coverage=1 00:14:28.578 --rc genhtml_legend=1 00:14:28.578 --rc geninfo_all_blocks=1 00:14:28.578 --rc geninfo_unexecuted_blocks=1 00:14:28.578 00:14:28.578 ' 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:28.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.578 --rc genhtml_branch_coverage=1 00:14:28.578 --rc genhtml_function_coverage=1 00:14:28.578 --rc genhtml_legend=1 00:14:28.578 --rc geninfo_all_blocks=1 00:14:28.578 --rc geninfo_unexecuted_blocks=1 00:14:28.578 00:14:28.578 ' 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
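The lt 1.15 2 / cmp_versions trace above is the lcov version gate from scripts/common.sh: both version strings are split on '.', '-' and ':' and compared field by field. A rough sketch of that logic (the exact return handling and the decimal normalization helper differ in the real script; treating missing fields as 0 is an assumption):

  cmp_versions() {                         # e.g. cmp_versions 1.15 '<' 2
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"       # 1.15 -> (1 15)      (scripts/common.sh@336)
      IFS=.-: read -ra ver2 <<< "$3"       # 2    -> (2)         (scripts/common.sh@337)
      local op=$2 v
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          local a=${ver1[v]:-0} b=${ver2[v]:-0}
          (( a > b )) && { [[ $op == '>' ]]; return; }
          (( a < b )) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == '=' ]]
  }
  lt() { cmp_versions "$1" '<' "$2"; }     # used above as: lt "$(lcov --version | awk '{print $NF}')" 2

Here lcov 1.15 compares as older than 2, so the pre-2.0 LCOV_OPTS (the lcov_branch_coverage/lcov_function_coverage flags just above) are selected.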
00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:28.578 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
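build_nvmf_app_args, traced just above, only contributes the shared-memory id and the 0xFFFF error/trace mask in this configuration (the "[: : integer expression expected" message comes from a numeric test on an unset variable at nvmf/common.sh line 33 and does not stop the run). Pieced together from this trace and the nvmfappstart step further down, the target command line is assembled roughly as follows; the first line, where the binary path is seeded, is assumed rather than visible in the trace:

  NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)   # assumed default app path
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)                   # nvmf/common.sh@29: shm id 0, full error/trace mask
  NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")   # nvmf/common.sh@156, namespace created below
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")        # nvmf/common.sh@227, after the veth/netns setup
  "${NVMF_APP[@]}" -m 0x2 &                                     # nvmfappstart -m 0x2, backgrounded and waited on,
                                                                # -> ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2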
00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:28.578 Cannot find device "nvmf_init_br" 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:14:28.578 14:40:37 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:28.578 Cannot find device "nvmf_init_br2" 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:14:28.578 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:28.838 Cannot find device "nvmf_tgt_br" 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:28.838 Cannot find device "nvmf_tgt_br2" 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:28.838 Cannot find device "nvmf_init_br" 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:28.838 Cannot find device "nvmf_init_br2" 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:28.838 Cannot find device "nvmf_tgt_br" 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:28.838 Cannot find device "nvmf_tgt_br2" 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:28.838 Cannot find device "nvmf_br" 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:28.838 Cannot find device "nvmf_init_if" 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:28.838 Cannot find device "nvmf_init_if2" 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:28.838 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:28.838 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:28.838 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:29.135 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:29.135 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:29.135 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:29.135 14:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:29.135 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:29.136 14:40:38 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:29.136 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:29.136 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:29.136 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:29.136 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:29.136 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:14:29.136 00:14:29.136 --- 10.0.0.3 ping statistics --- 00:14:29.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.136 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:14:29.136 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:29.136 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:29.136 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:14:29.136 00:14:29.136 --- 10.0.0.4 ping statistics --- 00:14:29.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.136 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:14:29.136 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:29.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:29.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:29.136 00:14:29.136 --- 10.0.0.1 ping statistics --- 00:14:29.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.136 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:29.136 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:29.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:29.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:14:29.136 00:14:29.136 --- 10.0.0.2 ping statistics --- 00:14:29.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.136 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:14:29.136 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:29.136 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:14:29.136 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:29.136 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:29.136 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:29.136 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:29.136 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:29.136 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:29.136 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:29.136 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:29.136 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:29.136 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:29.136 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:29.136 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=64313 00:14:29.136 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 64313 00:14:29.136 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 64313 ']' 00:14:29.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.136 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.136 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:29.136 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.136 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:29.136 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:29.136 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:29.136 [2024-11-04 14:40:38.098988] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
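The nvmf_veth_init block traced above is what those four successful pings verified: two initiator veths left in the root namespace, two target veths moved into nvmf_tgt_ns_spdk, all four bridge-side peers enslaved to nvmf_br, and iptables rules opening TCP/4420. Condensed from the trace (the individual link-up steps and the SPDK_NVMF comment tags on the iptables rules are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br      # initiator side, 10.0.0.1
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2     # initiator side, 10.0.0.2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br       # target side,    10.0.0.3
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2      # target side,    10.0.0.4
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br  master nvmf_br
  ip link set nvmf_init_br2 master nvmf_br
  ip link set nvmf_tgt_br   master nvmf_br
  ip link set nvmf_tgt_br2  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

10.0.0.3 and 10.0.0.4 are the target-side addresses used as NVMe/TCP listener addresses in these tests (the same two the multipath test above toggled between optimized, non-optimized and inaccessible).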
00:14:29.136 [2024-11-04 14:40:38.099059] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.136 [2024-11-04 14:40:38.237823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.396 [2024-11-04 14:40:38.285979] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.396 [2024-11-04 14:40:38.286024] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:29.396 [2024-11-04 14:40:38.286032] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:29.396 [2024-11-04 14:40:38.286037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:29.396 [2024-11-04 14:40:38.286042] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:29.396 [2024-11-04 14:40:38.286299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.396 [2024-11-04 14:40:38.318849] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:29.969 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:29.969 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:14:29.969 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:29.969 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:29.969 14:40:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:29.969 [2024-11-04 14:40:39.048832] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:14:29.969 [2024-11-04 14:40:39.064886] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:29.969 malloc0 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:29.969 { 00:14:29.969 "params": { 00:14:29.969 "name": "Nvme$subsystem", 00:14:29.969 "trtype": "$TEST_TRANSPORT", 00:14:29.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:29.969 "adrfam": "ipv4", 00:14:29.969 "trsvcid": "$NVMF_PORT", 00:14:29.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:29.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:29.969 "hdgst": ${hdgst:-false}, 00:14:29.969 "ddgst": ${ddgst:-false} 00:14:29.969 }, 00:14:29.969 "method": "bdev_nvme_attach_controller" 00:14:29.969 } 00:14:29.969 EOF 00:14:29.969 )") 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
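Written out as plain rpc.py calls, the target bring-up that zcopy.sh just performed through rpc_cmd (the harness wrapper around scripts/rpc.py) is:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                                       # zero-copy TCP transport (zcopy.sh@22)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10    # zcopy.sh@24
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420  # zcopy.sh@25
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420                   # zcopy.sh@27
  $rpc bdev_malloc_create 32 4096 -b malloc0                                              # 32 MiB bdev, 4 KiB blocks (zcopy.sh@29)
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1                      # zcopy.sh@30

The heredoc being assembled above (gen_nvmf_target_json) then renders the matching initiator-side bdev_nvme_attach_controller parameters, which bdevperf reads over /dev/fd/62 for the 10-second verify workload (-q 128, -o 8192).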
00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:14:29.969 14:40:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:29.969 "params": { 00:14:29.969 "name": "Nvme1", 00:14:29.969 "trtype": "tcp", 00:14:29.969 "traddr": "10.0.0.3", 00:14:29.969 "adrfam": "ipv4", 00:14:29.969 "trsvcid": "4420", 00:14:29.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:29.969 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:29.969 "hdgst": false, 00:14:29.969 "ddgst": false 00:14:29.969 }, 00:14:29.969 "method": "bdev_nvme_attach_controller" 00:14:29.969 }' 00:14:30.231 [2024-11-04 14:40:39.134597] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:14:30.231 [2024-11-04 14:40:39.134691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64346 ] 00:14:30.231 [2024-11-04 14:40:39.274353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.231 [2024-11-04 14:40:39.310411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.231 [2024-11-04 14:40:39.350406] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:30.492 Running I/O for 10 seconds... 00:14:32.376 6759.00 IOPS, 52.80 MiB/s [2024-11-04T14:40:42.899Z] 6825.50 IOPS, 53.32 MiB/s [2024-11-04T14:40:43.468Z] 6829.00 IOPS, 53.35 MiB/s [2024-11-04T14:40:44.853Z] 6815.00 IOPS, 53.24 MiB/s [2024-11-04T14:40:45.796Z] 6806.60 IOPS, 53.18 MiB/s [2024-11-04T14:40:46.737Z] 6808.00 IOPS, 53.19 MiB/s [2024-11-04T14:40:47.676Z] 6806.00 IOPS, 53.17 MiB/s [2024-11-04T14:40:48.636Z] 6795.12 IOPS, 53.09 MiB/s [2024-11-04T14:40:49.582Z] 6800.33 IOPS, 53.13 MiB/s [2024-11-04T14:40:49.582Z] 6805.00 IOPS, 53.16 MiB/s 00:14:40.442 Latency(us) 00:14:40.442 [2024-11-04T14:40:49.582Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.442 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:14:40.442 Verification LBA range: start 0x0 length 0x1000 00:14:40.442 Nvme1n1 : 10.01 6807.55 53.18 0.00 0.00 18748.76 2659.25 28432.54 00:14:40.442 [2024-11-04T14:40:49.582Z] =================================================================================================================== 00:14:40.442 [2024-11-04T14:40:49.582Z] Total : 6807.55 53.18 0.00 0.00 18748.76 2659.25 28432.54 00:14:40.703 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=64468 00:14:40.703 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:14:40.703 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:40.703 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:14:40.703 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:14:40.703 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:14:40.703 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:14:40.703 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:40.703 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:40.703 { 00:14:40.703 "params": { 00:14:40.703 "name": "Nvme$subsystem", 00:14:40.703 "trtype": "$TEST_TRANSPORT", 00:14:40.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:40.703 "adrfam": "ipv4", 00:14:40.703 "trsvcid": "$NVMF_PORT", 00:14:40.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:40.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:40.703 "hdgst": ${hdgst:-false}, 00:14:40.703 "ddgst": ${ddgst:-false} 00:14:40.703 }, 00:14:40.703 "method": "bdev_nvme_attach_controller" 00:14:40.703 } 00:14:40.703 EOF 00:14:40.703 )") 00:14:40.703 [2024-11-04 14:40:49.594686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.703 [2024-11-04 14:40:49.594721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.703 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:14:40.703 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:14:40.703 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:14:40.703 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:40.703 "params": { 00:14:40.703 "name": "Nvme1", 00:14:40.703 "trtype": "tcp", 00:14:40.703 "traddr": "10.0.0.3", 00:14:40.703 "adrfam": "ipv4", 00:14:40.703 "trsvcid": "4420", 00:14:40.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:40.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:40.703 "hdgst": false, 00:14:40.703 "ddgst": false 00:14:40.703 }, 00:14:40.703 "method": "bdev_nvme_attach_controller" 00:14:40.703 }' 00:14:40.703 [2024-11-04 14:40:49.602658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.703 [2024-11-04 14:40:49.602677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.703 [2024-11-04 14:40:49.610658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.703 [2024-11-04 14:40:49.610678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.703 [2024-11-04 14:40:49.618655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.703 [2024-11-04 14:40:49.618677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.703 [2024-11-04 14:40:49.626515] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:14:40.703 [2024-11-04 14:40:49.626574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64468 ] 00:14:40.703 [2024-11-04 14:40:49.626655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.703 [2024-11-04 14:40:49.626664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.703 [2024-11-04 14:40:49.638661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.703 [2024-11-04 14:40:49.638681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.703 [2024-11-04 14:40:49.646664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.703 [2024-11-04 14:40:49.646687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.703 [2024-11-04 14:40:49.654662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.703 [2024-11-04 14:40:49.654680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.703 [2024-11-04 14:40:49.662663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.703 [2024-11-04 14:40:49.662681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.703 [2024-11-04 14:40:49.670664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.703 [2024-11-04 14:40:49.670680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.703 [2024-11-04 14:40:49.678666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.703 [2024-11-04 14:40:49.678682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.703 [2024-11-04 14:40:49.686667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.703 [2024-11-04 14:40:49.686683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.703 [2024-11-04 14:40:49.694670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.703 [2024-11-04 14:40:49.694691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.703 [2024-11-04 14:40:49.702672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.703 [2024-11-04 14:40:49.702689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.703 [2024-11-04 14:40:49.710675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.703 [2024-11-04 14:40:49.710692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.703 [2024-11-04 14:40:49.718678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.703 [2024-11-04 14:40:49.718695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.703 [2024-11-04 14:40:49.726680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.703 [2024-11-04 14:40:49.726697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.703 [2024-11-04 14:40:49.734683] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.703 [2024-11-04 14:40:49.734701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.703 [2024-11-04 14:40:49.742684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.703 [2024-11-04 14:40:49.742703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.703 [2024-11-04 14:40:49.750686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.703 [2024-11-04 14:40:49.750703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.703 [2024-11-04 14:40:49.758688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.704 [2024-11-04 14:40:49.758705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.704 [2024-11-04 14:40:49.765850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.704 [2024-11-04 14:40:49.766691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.704 [2024-11-04 14:40:49.766706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.704 [2024-11-04 14:40:49.774693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.704 [2024-11-04 14:40:49.774716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.704 [2024-11-04 14:40:49.782695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.704 [2024-11-04 14:40:49.782719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.704 [2024-11-04 14:40:49.790697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.704 [2024-11-04 14:40:49.790715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.704 [2024-11-04 14:40:49.798698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.704 [2024-11-04 14:40:49.798716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.704 [2024-11-04 14:40:49.804085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.704 [2024-11-04 14:40:49.806700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.704 [2024-11-04 14:40:49.806717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.704 [2024-11-04 14:40:49.814707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.704 [2024-11-04 14:40:49.814730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.704 [2024-11-04 14:40:49.826715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.704 [2024-11-04 14:40:49.826736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.704 [2024-11-04 14:40:49.834711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.704 [2024-11-04 14:40:49.834734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.970 [2024-11-04 14:40:49.842714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.970 [2024-11-04 14:40:49.842736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:14:40.970 [2024-11-04 14:40:49.845568] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:40.970 [2024-11-04 14:40:49.850717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.970 [2024-11-04 14:40:49.850741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.970 [2024-11-04 14:40:49.858715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.970 [2024-11-04 14:40:49.858734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.970 [2024-11-04 14:40:49.866717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.970 [2024-11-04 14:40:49.866735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.970 [2024-11-04 14:40:49.874720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.971 [2024-11-04 14:40:49.874738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.971 [2024-11-04 14:40:49.882857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.971 [2024-11-04 14:40:49.882882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.971 [2024-11-04 14:40:49.890857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.971 [2024-11-04 14:40:49.890879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.971 [2024-11-04 14:40:49.902865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.971 [2024-11-04 14:40:49.902889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.971 [2024-11-04 14:40:49.910868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.971 [2024-11-04 14:40:49.910889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.971 [2024-11-04 14:40:49.918872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.971 [2024-11-04 14:40:49.918892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.971 [2024-11-04 14:40:49.930885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.971 [2024-11-04 14:40:49.930909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.971 [2024-11-04 14:40:49.942899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.971 [2024-11-04 14:40:49.942926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.971 [2024-11-04 14:40:49.950897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.971 [2024-11-04 14:40:49.950915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.971 Running I/O for 5 seconds... 
00:14:40.971 [2024-11-04 14:40:49.958928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.971 [2024-11-04 14:40:49.958954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.971 [2024-11-04 14:40:49.972005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.971 [2024-11-04 14:40:49.972036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.971 [2024-11-04 14:40:49.983564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.971 [2024-11-04 14:40:49.983591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.971 [2024-11-04 14:40:49.991372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.971 [2024-11-04 14:40:49.991399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.971 [2024-11-04 14:40:50.002784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.971 [2024-11-04 14:40:50.002812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.971 [2024-11-04 14:40:50.017446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.971 [2024-11-04 14:40:50.017476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.971 [2024-11-04 14:40:50.025906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.971 [2024-11-04 14:40:50.025933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.971 [2024-11-04 14:40:50.042411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.971 [2024-11-04 14:40:50.042440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.971 [2024-11-04 14:40:50.058989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.971 [2024-11-04 14:40:50.059019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.971 [2024-11-04 14:40:50.076853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.971 [2024-11-04 14:40:50.076882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.971 [2024-11-04 14:40:50.086243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.971 [2024-11-04 14:40:50.086270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.971 [2024-11-04 14:40:50.099894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.971 [2024-11-04 14:40:50.099935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.232 [2024-11-04 14:40:50.114792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.232 [2024-11-04 14:40:50.114820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.232 [2024-11-04 14:40:50.123383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.232 [2024-11-04 14:40:50.123409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.232 [2024-11-04 14:40:50.133193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.232 
[2024-11-04 14:40:50.133219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.232 [2024-11-04 14:40:50.146868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.232 [2024-11-04 14:40:50.146894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.232 [2024-11-04 14:40:50.155293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.232 [2024-11-04 14:40:50.155318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.232 [2024-11-04 14:40:50.166942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.232 [2024-11-04 14:40:50.166967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.232 [2024-11-04 14:40:50.175897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.232 [2024-11-04 14:40:50.175922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.232 [2024-11-04 14:40:50.185853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.232 [2024-11-04 14:40:50.185878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.232 [2024-11-04 14:40:50.195042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.232 [2024-11-04 14:40:50.195071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.232 [2024-11-04 14:40:50.208806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.232 [2024-11-04 14:40:50.208831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.232 [2024-11-04 14:40:50.217202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.232 [2024-11-04 14:40:50.217229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.232 [2024-11-04 14:40:50.228896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.232 [2024-11-04 14:40:50.228921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.232 [2024-11-04 14:40:50.237827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.232 [2024-11-04 14:40:50.237852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.232 [2024-11-04 14:40:50.247300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.232 [2024-11-04 14:40:50.247324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.232 [2024-11-04 14:40:50.260873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.232 [2024-11-04 14:40:50.260899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.232 [2024-11-04 14:40:50.277547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.232 [2024-11-04 14:40:50.277573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.232 [2024-11-04 14:40:50.292975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.232 [2024-11-04 14:40:50.293002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.232 [2024-11-04 14:40:50.304208] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.232 [2024-11-04 14:40:50.304233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.232 [2024-11-04 14:40:50.312166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.232 [2024-11-04 14:40:50.312193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.232 [2024-11-04 14:40:50.323919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.232 [2024-11-04 14:40:50.323945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.232 [2024-11-04 14:40:50.331918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.232 [2024-11-04 14:40:50.331943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.232 [2024-11-04 14:40:50.342129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.232 [2024-11-04 14:40:50.342157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.232 [2024-11-04 14:40:50.354076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.232 [2024-11-04 14:40:50.354102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.232 [2024-11-04 14:40:50.370843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.232 [2024-11-04 14:40:50.370869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.493 [2024-11-04 14:40:50.386322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.493 [2024-11-04 14:40:50.386348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.493 [2024-11-04 14:40:50.403766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.493 [2024-11-04 14:40:50.403793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.493 [2024-11-04 14:40:50.414929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.493 [2024-11-04 14:40:50.414957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.493 [2024-11-04 14:40:50.423427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.493 [2024-11-04 14:40:50.423454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.493 [2024-11-04 14:40:50.434477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.493 [2024-11-04 14:40:50.434506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.493 [2024-11-04 14:40:50.445834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.493 [2024-11-04 14:40:50.445862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.493 [2024-11-04 14:40:50.453791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.493 [2024-11-04 14:40:50.453816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.493 [2024-11-04 14:40:50.465458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.493 [2024-11-04 14:40:50.465483] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.493 [2024-11-04 14:40:50.473950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.493 [2024-11-04 14:40:50.473975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.493 [2024-11-04 14:40:50.486521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.493 [2024-11-04 14:40:50.486546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.493 [2024-11-04 14:40:50.495624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.493 [2024-11-04 14:40:50.495647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.493 [2024-11-04 14:40:50.507265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.493 [2024-11-04 14:40:50.507291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.493 [2024-11-04 14:40:50.522326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.493 [2024-11-04 14:40:50.522351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.493 [2024-11-04 14:40:50.533447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.493 [2024-11-04 14:40:50.533474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.493 [2024-11-04 14:40:50.549070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.493 [2024-11-04 14:40:50.549099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.493 [2024-11-04 14:40:50.566706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.493 [2024-11-04 14:40:50.566738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.493 [2024-11-04 14:40:50.582188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.493 [2024-11-04 14:40:50.582215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.493 [2024-11-04 14:40:50.590508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.493 [2024-11-04 14:40:50.590534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.493 [2024-11-04 14:40:50.602249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.493 [2024-11-04 14:40:50.602274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.493 [2024-11-04 14:40:50.613729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.493 [2024-11-04 14:40:50.613753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.493 [2024-11-04 14:40:50.622164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.493 [2024-11-04 14:40:50.622191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.493 [2024-11-04 14:40:50.632175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.493 [2024-11-04 14:40:50.632202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.753 [2024-11-04 14:40:50.641691] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.753 [2024-11-04 14:40:50.641716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.753 [2024-11-04 14:40:50.651126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.753 [2024-11-04 14:40:50.651152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.753 [2024-11-04 14:40:50.660503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.753 [2024-11-04 14:40:50.660529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.753 [2024-11-04 14:40:50.669659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.753 [2024-11-04 14:40:50.669683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.753 [2024-11-04 14:40:50.678866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.753 [2024-11-04 14:40:50.678891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.753 [2024-11-04 14:40:50.688235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.753 [2024-11-04 14:40:50.688260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.753 [2024-11-04 14:40:50.697546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.753 [2024-11-04 14:40:50.697571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.753 [2024-11-04 14:40:50.707182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.753 [2024-11-04 14:40:50.707208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.753 [2024-11-04 14:40:50.716538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.753 [2024-11-04 14:40:50.716564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.753 [2024-11-04 14:40:50.726007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.753 [2024-11-04 14:40:50.726032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.753 [2024-11-04 14:40:50.735536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.753 [2024-11-04 14:40:50.735560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.753 [2024-11-04 14:40:50.744749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.753 [2024-11-04 14:40:50.744774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.753 [2024-11-04 14:40:50.754000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.753 [2024-11-04 14:40:50.754025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.753 [2024-11-04 14:40:50.763426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.753 [2024-11-04 14:40:50.763452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.754 [2024-11-04 14:40:50.772718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.754 [2024-11-04 14:40:50.772753] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.754 [2024-11-04 14:40:50.782122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.754 [2024-11-04 14:40:50.782149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.754 [2024-11-04 14:40:50.791447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.754 [2024-11-04 14:40:50.791473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.754 [2024-11-04 14:40:50.801073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.754 [2024-11-04 14:40:50.801097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.754 [2024-11-04 14:40:50.814443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.754 [2024-11-04 14:40:50.814468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.754 [2024-11-04 14:40:50.830025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.754 [2024-11-04 14:40:50.830051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.754 [2024-11-04 14:40:50.841265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.754 [2024-11-04 14:40:50.841290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.754 [2024-11-04 14:40:50.849738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.754 [2024-11-04 14:40:50.849762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.754 [2024-11-04 14:40:50.861466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.754 [2024-11-04 14:40:50.861490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.754 [2024-11-04 14:40:50.872909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.754 [2024-11-04 14:40:50.872935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.754 [2024-11-04 14:40:50.889145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.754 [2024-11-04 14:40:50.889175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.014 [2024-11-04 14:40:50.900646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.014 [2024-11-04 14:40:50.900703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.014 [2024-11-04 14:40:50.917271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.014 [2024-11-04 14:40:50.917298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.014 [2024-11-04 14:40:50.932658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.015 [2024-11-04 14:40:50.932685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.015 [2024-11-04 14:40:50.944090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.015 [2024-11-04 14:40:50.944116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.015 [2024-11-04 14:40:50.952185] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.015 [2024-11-04 14:40:50.952210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.015 13421.00 IOPS, 104.85 MiB/s [2024-11-04T14:40:51.155Z] [2024-11-04 14:40:50.967617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.015 [2024-11-04 14:40:50.967639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.015 [2024-11-04 14:40:50.976257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.015 [2024-11-04 14:40:50.976282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.015 [2024-11-04 14:40:50.986162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.015 [2024-11-04 14:40:50.986187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.015 [2024-11-04 14:40:50.995491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.015 [2024-11-04 14:40:50.995519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.015 [2024-11-04 14:40:51.009134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.015 [2024-11-04 14:40:51.009158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.015 [2024-11-04 14:40:51.025737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.015 [2024-11-04 14:40:51.025764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.015 [2024-11-04 14:40:51.041064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.015 [2024-11-04 14:40:51.041089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.015 [2024-11-04 14:40:51.052299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.015 [2024-11-04 14:40:51.052324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.015 [2024-11-04 14:40:51.060286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.015 [2024-11-04 14:40:51.060311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.015 [2024-11-04 14:40:51.076791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.015 [2024-11-04 14:40:51.076815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.015 [2024-11-04 14:40:51.088000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.015 [2024-11-04 14:40:51.088026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.015 [2024-11-04 14:40:51.104200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.015 [2024-11-04 14:40:51.104228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.015 [2024-11-04 14:40:51.120494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.015 [2024-11-04 14:40:51.120521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.015 [2024-11-04 14:40:51.131590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:14:42.015 [2024-11-04 14:40:51.131625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.015 [2024-11-04 14:40:51.147853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.015 [2024-11-04 14:40:51.147879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.274 [2024-11-04 14:40:51.158852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.274 [2024-11-04 14:40:51.158879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.274 [2024-11-04 14:40:51.175090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.274 [2024-11-04 14:40:51.175117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.274 [2024-11-04 14:40:51.186395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.274 [2024-11-04 14:40:51.186422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.274 [2024-11-04 14:40:51.202703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.274 [2024-11-04 14:40:51.202728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.274 [2024-11-04 14:40:51.220273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.274 [2024-11-04 14:40:51.220301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.274 [2024-11-04 14:40:51.237652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.274 [2024-11-04 14:40:51.237675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.274 [2024-11-04 14:40:51.248658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.274 [2024-11-04 14:40:51.248685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.274 [2024-11-04 14:40:51.257012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.274 [2024-11-04 14:40:51.257037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.274 [2024-11-04 14:40:51.268806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.274 [2024-11-04 14:40:51.268957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.274 [2024-11-04 14:40:51.285564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.274 [2024-11-04 14:40:51.285591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.274 [2024-11-04 14:40:51.296700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.274 [2024-11-04 14:40:51.296734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.274 [2024-11-04 14:40:51.313079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.274 [2024-11-04 14:40:51.313104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.274 [2024-11-04 14:40:51.329886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.274 [2024-11-04 14:40:51.329913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.274 [2024-11-04 14:40:51.340859] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.274 [2024-11-04 14:40:51.340886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.274 [2024-11-04 14:40:51.348977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.274 [2024-11-04 14:40:51.349004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.274 [2024-11-04 14:40:51.363402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.274 [2024-11-04 14:40:51.363430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.274 [2024-11-04 14:40:51.372092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.274 [2024-11-04 14:40:51.372118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.274 [2024-11-04 14:40:51.381952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.274 [2024-11-04 14:40:51.381978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.274 [2024-11-04 14:40:51.391386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.274 [2024-11-04 14:40:51.391412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.274 [2024-11-04 14:40:51.405045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.274 [2024-11-04 14:40:51.405070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.536 [2024-11-04 14:40:51.421709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.536 [2024-11-04 14:40:51.421733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.536 [2024-11-04 14:40:51.433543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.536 [2024-11-04 14:40:51.433569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.536 [2024-11-04 14:40:51.449687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.536 [2024-11-04 14:40:51.449715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.536 [2024-11-04 14:40:51.467100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.536 [2024-11-04 14:40:51.467128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.536 [2024-11-04 14:40:51.482587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.536 [2024-11-04 14:40:51.482627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.536 [2024-11-04 14:40:51.500455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.536 [2024-11-04 14:40:51.500486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.536 [2024-11-04 14:40:51.516664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.536 [2024-11-04 14:40:51.516697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.536 [2024-11-04 14:40:51.527855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.536 [2024-11-04 14:40:51.527884] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.536 [2024-11-04 14:40:51.544205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.536 [2024-11-04 14:40:51.544233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.536 [2024-11-04 14:40:51.562018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.536 [2024-11-04 14:40:51.562053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.536 [2024-11-04 14:40:51.573550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.536 [2024-11-04 14:40:51.573578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.536 [2024-11-04 14:40:51.589981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.536 [2024-11-04 14:40:51.590010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.536 [2024-11-04 14:40:51.601294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.536 [2024-11-04 14:40:51.601322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.536 [2024-11-04 14:40:51.609720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.536 [2024-11-04 14:40:51.609744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.536 [2024-11-04 14:40:51.625635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.536 [2024-11-04 14:40:51.625662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.536 [2024-11-04 14:40:51.634145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.536 [2024-11-04 14:40:51.634171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.536 [2024-11-04 14:40:51.650599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.536 [2024-11-04 14:40:51.650634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.536 [2024-11-04 14:40:51.659279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.536 [2024-11-04 14:40:51.659304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.536 [2024-11-04 14:40:51.669257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.536 [2024-11-04 14:40:51.669284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.797 [2024-11-04 14:40:51.678629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.797 [2024-11-04 14:40:51.678655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.797 [2024-11-04 14:40:51.688013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.797 [2024-11-04 14:40:51.688043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.797 [2024-11-04 14:40:51.697745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.797 [2024-11-04 14:40:51.697773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.797 [2024-11-04 14:40:51.707435] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.797 [2024-11-04 14:40:51.707465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.797 [2024-11-04 14:40:51.717266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.797 [2024-11-04 14:40:51.717295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.797 [2024-11-04 14:40:51.731221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.797 [2024-11-04 14:40:51.731253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.797 [2024-11-04 14:40:51.739952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.797 [2024-11-04 14:40:51.739982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.797 [2024-11-04 14:40:51.754825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.797 [2024-11-04 14:40:51.754859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.797 [2024-11-04 14:40:51.766235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.797 [2024-11-04 14:40:51.766266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.797 [2024-11-04 14:40:51.774472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.797 [2024-11-04 14:40:51.774505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.797 [2024-11-04 14:40:51.784754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.797 [2024-11-04 14:40:51.784783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.797 [2024-11-04 14:40:51.794111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.797 [2024-11-04 14:40:51.794144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.797 [2024-11-04 14:40:51.803440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.797 [2024-11-04 14:40:51.803474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.797 [2024-11-04 14:40:51.812704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.797 [2024-11-04 14:40:51.812742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.797 [2024-11-04 14:40:51.822189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.797 [2024-11-04 14:40:51.822221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.797 [2024-11-04 14:40:51.835765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.797 [2024-11-04 14:40:51.835797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.797 [2024-11-04 14:40:51.852342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.797 [2024-11-04 14:40:51.852389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.797 [2024-11-04 14:40:51.868021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.797 [2024-11-04 14:40:51.868065] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.797 [2024-11-04 14:40:51.885928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.797 [2024-11-04 14:40:51.885961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.797 [2024-11-04 14:40:51.901412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.797 [2024-11-04 14:40:51.901446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.797 [2024-11-04 14:40:51.913116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.797 [2024-11-04 14:40:51.913146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.797 [2024-11-04 14:40:51.921003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.797 [2024-11-04 14:40:51.921031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.798 [2024-11-04 14:40:51.933112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.798 [2024-11-04 14:40:51.933144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.066 [2024-11-04 14:40:51.944888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.066 [2024-11-04 14:40:51.944917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.066 [2024-11-04 14:40:51.953258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.066 [2024-11-04 14:40:51.953282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.066 13439.00 IOPS, 104.99 MiB/s [2024-11-04T14:40:52.206Z] [2024-11-04 14:40:51.969925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.066 [2024-11-04 14:40:51.969951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.066 [2024-11-04 14:40:51.981361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.066 [2024-11-04 14:40:51.981387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.066 [2024-11-04 14:40:51.989934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.066 [2024-11-04 14:40:51.989958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.066 [2024-11-04 14:40:52.004903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.066 [2024-11-04 14:40:52.004929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.066 [2024-11-04 14:40:52.013148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.066 [2024-11-04 14:40:52.013173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.066 [2024-11-04 14:40:52.024737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.066 [2024-11-04 14:40:52.024764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.066 [2024-11-04 14:40:52.033734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.066 [2024-11-04 14:40:52.033758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.066 [2024-11-04 
14:40:52.045288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.066 [2024-11-04 14:40:52.045312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.066 [2024-11-04 14:40:52.060862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.066 [2024-11-04 14:40:52.060886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.066 [2024-11-04 14:40:52.077735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.066 [2024-11-04 14:40:52.077768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.066 [2024-11-04 14:40:52.093713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.066 [2024-11-04 14:40:52.093745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.066 [2024-11-04 14:40:52.110457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.066 [2024-11-04 14:40:52.110490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.066 [2024-11-04 14:40:52.121678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.066 [2024-11-04 14:40:52.121711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.066 [2024-11-04 14:40:52.137860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.066 [2024-11-04 14:40:52.137893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.066 [2024-11-04 14:40:52.148674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.066 [2024-11-04 14:40:52.148720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.066 [2024-11-04 14:40:52.164878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.066 [2024-11-04 14:40:52.164908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.066 [2024-11-04 14:40:52.176085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.066 [2024-11-04 14:40:52.176117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.066 [2024-11-04 14:40:52.192515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.066 [2024-11-04 14:40:52.192545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.330 [2024-11-04 14:40:52.203929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.330 [2024-11-04 14:40:52.203956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.330 [2024-11-04 14:40:52.219012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.330 [2024-11-04 14:40:52.219039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.330 [2024-11-04 14:40:52.236994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.330 [2024-11-04 14:40:52.237023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.330 [2024-11-04 14:40:52.252552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.330 [2024-11-04 14:40:52.252581] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.330 [2024-11-04 14:40:52.263689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.330 [2024-11-04 14:40:52.263714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.330 [2024-11-04 14:40:52.271551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.330 [2024-11-04 14:40:52.271577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.330 [2024-11-04 14:40:52.283516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.330 [2024-11-04 14:40:52.283542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.330 [2024-11-04 14:40:52.291839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.330 [2024-11-04 14:40:52.291864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.330 [2024-11-04 14:40:52.301138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.330 [2024-11-04 14:40:52.301162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.330 [2024-11-04 14:40:52.317719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.330 [2024-11-04 14:40:52.317744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.330 [2024-11-04 14:40:52.334792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.330 [2024-11-04 14:40:52.334821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.330 [2024-11-04 14:40:52.350271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.330 [2024-11-04 14:40:52.350298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.330 [2024-11-04 14:40:52.368354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.330 [2024-11-04 14:40:52.368380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.330 [2024-11-04 14:40:52.386134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.330 [2024-11-04 14:40:52.386161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.330 [2024-11-04 14:40:52.401668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.330 [2024-11-04 14:40:52.401695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.330 [2024-11-04 14:40:52.412453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.330 [2024-11-04 14:40:52.412479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.330 [2024-11-04 14:40:52.420448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.330 [2024-11-04 14:40:52.420475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.330 [2024-11-04 14:40:52.436220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.330 [2024-11-04 14:40:52.436250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.330 [2024-11-04 14:40:52.444900] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.330 [2024-11-04 14:40:52.444925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.330 [2024-11-04 14:40:52.459111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.330 [2024-11-04 14:40:52.459137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.593 [2024-11-04 14:40:52.475963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.593 [2024-11-04 14:40:52.475996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.593 [2024-11-04 14:40:52.491839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.593 [2024-11-04 14:40:52.491884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.593 [2024-11-04 14:40:52.508485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.593 [2024-11-04 14:40:52.508517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.593 [2024-11-04 14:40:52.526226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.593 [2024-11-04 14:40:52.526255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.593 [2024-11-04 14:40:52.537050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.593 [2024-11-04 14:40:52.537076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.593 [2024-11-04 14:40:52.545404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.593 [2024-11-04 14:40:52.545431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.593 [2024-11-04 14:40:52.555646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.593 [2024-11-04 14:40:52.555674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.593 [2024-11-04 14:40:52.570236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.593 [2024-11-04 14:40:52.570265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.593 [2024-11-04 14:40:52.578623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.593 [2024-11-04 14:40:52.578648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.593 [2024-11-04 14:40:52.590139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.593 [2024-11-04 14:40:52.590165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.593 [2024-11-04 14:40:52.598792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.593 [2024-11-04 14:40:52.598816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.593 [2024-11-04 14:40:52.608690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.593 [2024-11-04 14:40:52.608717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.593 [2024-11-04 14:40:52.618010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.593 [2024-11-04 14:40:52.618036] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.593 [2024-11-04 14:40:52.627332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.593 [2024-11-04 14:40:52.627357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.593 [2024-11-04 14:40:52.636999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.593 [2024-11-04 14:40:52.637024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.593 [2024-11-04 14:40:52.646596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.593 [2024-11-04 14:40:52.646629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.593 [2024-11-04 14:40:52.660359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.594 [2024-11-04 14:40:52.660384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.594 [2024-11-04 14:40:52.669280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.594 [2024-11-04 14:40:52.669305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.594 [2024-11-04 14:40:52.678776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.594 [2024-11-04 14:40:52.678802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.594 [2024-11-04 14:40:52.688182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.594 [2024-11-04 14:40:52.688206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.594 [2024-11-04 14:40:52.697311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.594 [2024-11-04 14:40:52.697341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.594 [2024-11-04 14:40:52.706532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.594 [2024-11-04 14:40:52.706559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.594 [2024-11-04 14:40:52.715895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.594 [2024-11-04 14:40:52.715921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.594 [2024-11-04 14:40:52.725213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.594 [2024-11-04 14:40:52.725240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.857 [2024-11-04 14:40:52.734757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.857 [2024-11-04 14:40:52.734783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.857 [2024-11-04 14:40:52.744030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.857 [2024-11-04 14:40:52.744057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.857 [2024-11-04 14:40:52.753397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.857 [2024-11-04 14:40:52.753421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.857 [2024-11-04 14:40:52.762805] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.857 [2024-11-04 14:40:52.762829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.857 [2024-11-04 14:40:52.772090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.857 [2024-11-04 14:40:52.772115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.857 [2024-11-04 14:40:52.781438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.857 [2024-11-04 14:40:52.781462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.857 [2024-11-04 14:40:52.790874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.857 [2024-11-04 14:40:52.790900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.857 [2024-11-04 14:40:52.800125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.857 [2024-11-04 14:40:52.800150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.857 [2024-11-04 14:40:52.809537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.857 [2024-11-04 14:40:52.809562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.857 [2024-11-04 14:40:52.818892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.857 [2024-11-04 14:40:52.818918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.857 [2024-11-04 14:40:52.832636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.857 [2024-11-04 14:40:52.832662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.857 [2024-11-04 14:40:52.849623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.857 [2024-11-04 14:40:52.849651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.857 [2024-11-04 14:40:52.866156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.857 [2024-11-04 14:40:52.866183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.857 [2024-11-04 14:40:52.877593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.857 [2024-11-04 14:40:52.877630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.857 [2024-11-04 14:40:52.894724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.857 [2024-11-04 14:40:52.894752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.857 [2024-11-04 14:40:52.910546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.857 [2024-11-04 14:40:52.910573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.857 [2024-11-04 14:40:52.928228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.857 [2024-11-04 14:40:52.928259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.857 [2024-11-04 14:40:52.937545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.857 [2024-11-04 14:40:52.937572] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.857 [2024-11-04 14:40:52.947151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.857 [2024-11-04 14:40:52.947179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.857 13468.00 IOPS, 105.22 MiB/s [2024-11-04T14:40:52.997Z] [2024-11-04 14:40:52.956640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.857 [2024-11-04 14:40:52.956670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.857 [2024-11-04 14:40:52.966024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.857 [2024-11-04 14:40:52.966052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.857 [2024-11-04 14:40:52.975599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.857 [2024-11-04 14:40:52.975635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.857 [2024-11-04 14:40:52.989320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.857 [2024-11-04 14:40:52.989352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.121 [2024-11-04 14:40:52.997888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.121 [2024-11-04 14:40:52.997918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.121 [2024-11-04 14:40:53.012483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.122 [2024-11-04 14:40:53.012518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.122 [2024-11-04 14:40:53.020646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.122 [2024-11-04 14:40:53.020673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.122 [2024-11-04 14:40:53.032137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.122 [2024-11-04 14:40:53.032164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.122 [2024-11-04 14:40:53.041474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.122 [2024-11-04 14:40:53.041501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.122 [2024-11-04 14:40:53.051006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.122 [2024-11-04 14:40:53.051032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.122 [2024-11-04 14:40:53.060361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.122 [2024-11-04 14:40:53.060390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.122 [2024-11-04 14:40:53.069774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.122 [2024-11-04 14:40:53.069800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.122 [2024-11-04 14:40:53.079000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.122 [2024-11-04 14:40:53.079026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.122 [2024-11-04 
14:40:53.088425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.122 [2024-11-04 14:40:53.088454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.122 [2024-11-04 14:40:53.097925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.122 [2024-11-04 14:40:53.097952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.122 [2024-11-04 14:40:53.107276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.122 [2024-11-04 14:40:53.107303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.122 [2024-11-04 14:40:53.120886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.122 [2024-11-04 14:40:53.120915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.122 [2024-11-04 14:40:53.129475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.122 [2024-11-04 14:40:53.129504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.122 [2024-11-04 14:40:53.143701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.122 [2024-11-04 14:40:53.143731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.122 [2024-11-04 14:40:53.160332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.122 [2024-11-04 14:40:53.160363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.122 [2024-11-04 14:40:53.177876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.122 [2024-11-04 14:40:53.177902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.122 [2024-11-04 14:40:53.193468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.122 [2024-11-04 14:40:53.193495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.122 [2024-11-04 14:40:53.204719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.122 [2024-11-04 14:40:53.204757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.122 [2024-11-04 14:40:53.220083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.122 [2024-11-04 14:40:53.220111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.122 [2024-11-04 14:40:53.231241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.122 [2024-11-04 14:40:53.231270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.122 [2024-11-04 14:40:53.239583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.122 [2024-11-04 14:40:53.239623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.122 [2024-11-04 14:40:53.251055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.122 [2024-11-04 14:40:53.251083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.383 [2024-11-04 14:40:53.262160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.383 [2024-11-04 14:40:53.262190] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.383 [2024-11-04 14:40:53.270368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.383 [2024-11-04 14:40:53.270396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.383 [2024-11-04 14:40:53.286283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.383 [2024-11-04 14:40:53.286312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.383 [2024-11-04 14:40:53.294427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.383 [2024-11-04 14:40:53.294456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.383 [2024-11-04 14:40:53.304790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.383 [2024-11-04 14:40:53.304818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.383 [2024-11-04 14:40:53.316709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.383 [2024-11-04 14:40:53.316749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.383 [2024-11-04 14:40:53.325422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.383 [2024-11-04 14:40:53.325451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.383 [2024-11-04 14:40:53.335404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.383 [2024-11-04 14:40:53.335430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.383 [2024-11-04 14:40:53.344870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.383 [2024-11-04 14:40:53.344898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.383 [2024-11-04 14:40:53.354187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.383 [2024-11-04 14:40:53.354215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.383 [2024-11-04 14:40:53.363442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.383 [2024-11-04 14:40:53.363468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.383 [2024-11-04 14:40:53.372821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.383 [2024-11-04 14:40:53.372846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.383 [2024-11-04 14:40:53.382351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.383 [2024-11-04 14:40:53.382375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.383 [2024-11-04 14:40:53.391782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.383 [2024-11-04 14:40:53.391807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.383 [2024-11-04 14:40:53.401565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.383 [2024-11-04 14:40:53.401591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.383 [2024-11-04 14:40:53.411060] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.383 [2024-11-04 14:40:53.411089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.383 [2024-11-04 14:40:53.420340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.383 [2024-11-04 14:40:53.420367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.383 [2024-11-04 14:40:53.430269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.383 [2024-11-04 14:40:53.430299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.383 [2024-11-04 14:40:53.440008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.383 [2024-11-04 14:40:53.440039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.383 [2024-11-04 14:40:53.449594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.383 [2024-11-04 14:40:53.449633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.383 [2024-11-04 14:40:53.459074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.383 [2024-11-04 14:40:53.459101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.383 [2024-11-04 14:40:53.468425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.383 [2024-11-04 14:40:53.468450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.383 [2024-11-04 14:40:53.477931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.383 [2024-11-04 14:40:53.477952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.383 [2024-11-04 14:40:53.487571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.383 [2024-11-04 14:40:53.487601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.383 [2024-11-04 14:40:53.497322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.383 [2024-11-04 14:40:53.497349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.383 [2024-11-04 14:40:53.506944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.383 [2024-11-04 14:40:53.506971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.383 [2024-11-04 14:40:53.516535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.383 [2024-11-04 14:40:53.516561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.644 [2024-11-04 14:40:53.526216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.644 [2024-11-04 14:40:53.526244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.644 [2024-11-04 14:40:53.535677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.644 [2024-11-04 14:40:53.535704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.644 [2024-11-04 14:40:53.545248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.644 [2024-11-04 14:40:53.545277] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.644 [2024-11-04 14:40:53.554622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.644 [2024-11-04 14:40:53.554648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.644 [2024-11-04 14:40:53.564284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.644 [2024-11-04 14:40:53.564312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.644 [2024-11-04 14:40:53.574063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.644 [2024-11-04 14:40:53.574092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.644 [2024-11-04 14:40:53.583562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.644 [2024-11-04 14:40:53.583589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.644 [2024-11-04 14:40:53.592838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.644 [2024-11-04 14:40:53.592863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.644 [2024-11-04 14:40:53.602150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.644 [2024-11-04 14:40:53.602176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.644 [2024-11-04 14:40:53.611563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.644 [2024-11-04 14:40:53.611590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.644 [2024-11-04 14:40:53.621283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.644 [2024-11-04 14:40:53.621320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.644 [2024-11-04 14:40:53.630490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.644 [2024-11-04 14:40:53.630516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.644 [2024-11-04 14:40:53.639958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.644 [2024-11-04 14:40:53.639984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.644 [2024-11-04 14:40:53.649336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.644 [2024-11-04 14:40:53.649362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.644 [2024-11-04 14:40:53.659068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.644 [2024-11-04 14:40:53.659095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.644 [2024-11-04 14:40:53.668531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.644 [2024-11-04 14:40:53.668558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.644 [2024-11-04 14:40:53.678160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.644 [2024-11-04 14:40:53.678190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.644 [2024-11-04 14:40:53.687933] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.644 [2024-11-04 14:40:53.687961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.644 [2024-11-04 14:40:53.697514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.644 [2024-11-04 14:40:53.697541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.644 [2024-11-04 14:40:53.706857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.644 [2024-11-04 14:40:53.706884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.644 [2024-11-04 14:40:53.716144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.644 [2024-11-04 14:40:53.716170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.644 [2024-11-04 14:40:53.725500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.644 [2024-11-04 14:40:53.725525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.644 [2024-11-04 14:40:53.734821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.644 [2024-11-04 14:40:53.734847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.644 [2024-11-04 14:40:53.744237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.644 [2024-11-04 14:40:53.744263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.644 [2024-11-04 14:40:53.753505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.644 [2024-11-04 14:40:53.753538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.644 [2024-11-04 14:40:53.763113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.644 [2024-11-04 14:40:53.763140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.644 [2024-11-04 14:40:53.772426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.644 [2024-11-04 14:40:53.772452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.645 [2024-11-04 14:40:53.781753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.645 [2024-11-04 14:40:53.781779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.906 [2024-11-04 14:40:53.791466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.906 [2024-11-04 14:40:53.791492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.906 [2024-11-04 14:40:53.801031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.906 [2024-11-04 14:40:53.801057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.906 [2024-11-04 14:40:53.810436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.906 [2024-11-04 14:40:53.810462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.906 [2024-11-04 14:40:53.820016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.906 [2024-11-04 14:40:53.820042] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.906 [2024-11-04 14:40:53.829506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.906 [2024-11-04 14:40:53.829534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.906 [2024-11-04 14:40:53.838896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.906 [2024-11-04 14:40:53.838921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.906 [2024-11-04 14:40:53.848374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.906 [2024-11-04 14:40:53.848400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.906 [2024-11-04 14:40:53.857616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.906 [2024-11-04 14:40:53.857641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.906 [2024-11-04 14:40:53.866956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.906 [2024-11-04 14:40:53.866980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.906 [2024-11-04 14:40:53.876014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.906 [2024-11-04 14:40:53.876038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.906 [2024-11-04 14:40:53.885234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.906 [2024-11-04 14:40:53.885259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.906 [2024-11-04 14:40:53.894686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.906 [2024-11-04 14:40:53.894710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.906 [2024-11-04 14:40:53.904121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.906 [2024-11-04 14:40:53.904146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.906 [2024-11-04 14:40:53.913527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.906 [2024-11-04 14:40:53.913553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.906 [2024-11-04 14:40:53.923009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.906 [2024-11-04 14:40:53.923034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.906 [2024-11-04 14:40:53.932289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.906 [2024-11-04 14:40:53.932314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.906 [2024-11-04 14:40:53.941599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.906 [2024-11-04 14:40:53.941632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.906 [2024-11-04 14:40:53.951044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.906 [2024-11-04 14:40:53.951072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.906 13460.25 IOPS, 105.16 MiB/s [2024-11-04T14:40:54.046Z] [2024-11-04 
14:40:53.960674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.906 [2024-11-04 14:40:53.960710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.906 [2024-11-04 14:40:53.970188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.906 [2024-11-04 14:40:53.970215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.906 [2024-11-04 14:40:53.979511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.906 [2024-11-04 14:40:53.979537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.906 [2024-11-04 14:40:53.989117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.906 [2024-11-04 14:40:53.989141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.906 [2024-11-04 14:40:53.998665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.906 [2024-11-04 14:40:53.998690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.906 [2024-11-04 14:40:54.008179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.906 [2024-11-04 14:40:54.008205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.906 [2024-11-04 14:40:54.017657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.906 [2024-11-04 14:40:54.017683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.906 [2024-11-04 14:40:54.026931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.906 [2024-11-04 14:40:54.026957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.906 [2024-11-04 14:40:54.036390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.906 [2024-11-04 14:40:54.036416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.166 [2024-11-04 14:40:54.045943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.166 [2024-11-04 14:40:54.045968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.166 [2024-11-04 14:40:54.055520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.166 [2024-11-04 14:40:54.055547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.166 [2024-11-04 14:40:54.065077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.166 [2024-11-04 14:40:54.065101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.166 [2024-11-04 14:40:54.074403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.166 [2024-11-04 14:40:54.074429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.166 [2024-11-04 14:40:54.083598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.166 [2024-11-04 14:40:54.083631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.166 [2024-11-04 14:40:54.092911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.166 [2024-11-04 14:40:54.092936] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.166 [2024-11-04 14:40:54.102069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.166 [2024-11-04 14:40:54.102094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.166 [2024-11-04 14:40:54.111482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.166 [2024-11-04 14:40:54.111508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.166 [2024-11-04 14:40:54.120659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.166 [2024-11-04 14:40:54.120685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.166 [2024-11-04 14:40:54.129963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.166 [2024-11-04 14:40:54.129990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.166 [2024-11-04 14:40:54.139323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.166 [2024-11-04 14:40:54.139350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.166 [2024-11-04 14:40:54.148597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.166 [2024-11-04 14:40:54.148630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.166 [2024-11-04 14:40:54.158849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.166 [2024-11-04 14:40:54.158882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.166 [2024-11-04 14:40:54.168491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.166 [2024-11-04 14:40:54.168522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.166 [2024-11-04 14:40:54.180318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.166 [2024-11-04 14:40:54.180350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.166 [2024-11-04 14:40:54.188631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.166 [2024-11-04 14:40:54.188661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.166 [2024-11-04 14:40:54.199349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.166 [2024-11-04 14:40:54.199379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.166 [2024-11-04 14:40:54.209131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.166 [2024-11-04 14:40:54.209163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.166 [2024-11-04 14:40:54.218913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.166 [2024-11-04 14:40:54.218953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.166 [2024-11-04 14:40:54.228578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.166 [2024-11-04 14:40:54.228616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.166 [2024-11-04 14:40:54.238165] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.166 [2024-11-04 14:40:54.238193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.166 [2024-11-04 14:40:54.247531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.166 [2024-11-04 14:40:54.247558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.166 [2024-11-04 14:40:54.256830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.166 [2024-11-04 14:40:54.256854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.166 [2024-11-04 14:40:54.266193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.166 [2024-11-04 14:40:54.266218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.166 [2024-11-04 14:40:54.275760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.166 [2024-11-04 14:40:54.275789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.166 [2024-11-04 14:40:54.289482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.166 [2024-11-04 14:40:54.289512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.166 [2024-11-04 14:40:54.298341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.166 [2024-11-04 14:40:54.298380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.428 [2024-11-04 14:40:54.310122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.428 [2024-11-04 14:40:54.310151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.428 [2024-11-04 14:40:54.321795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.428 [2024-11-04 14:40:54.321835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.428 [2024-11-04 14:40:54.330342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.428 [2024-11-04 14:40:54.330370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.428 [2024-11-04 14:40:54.345113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.428 [2024-11-04 14:40:54.345145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.428 [2024-11-04 14:40:54.353497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.428 [2024-11-04 14:40:54.353523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.428 [2024-11-04 14:40:54.369909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.428 [2024-11-04 14:40:54.369940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.428 [2024-11-04 14:40:54.381223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.428 [2024-11-04 14:40:54.381250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.428 [2024-11-04 14:40:54.397272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.428 [2024-11-04 14:40:54.397303] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.428 [2024-11-04 14:40:54.413000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.428 [2024-11-04 14:40:54.413029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.428 [2024-11-04 14:40:54.424675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.428 [2024-11-04 14:40:54.424701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.428 [2024-11-04 14:40:54.440831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.428 [2024-11-04 14:40:54.440857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.428 [2024-11-04 14:40:54.458674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.428 [2024-11-04 14:40:54.458701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.428 [2024-11-04 14:40:54.474333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.428 [2024-11-04 14:40:54.474363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.428 [2024-11-04 14:40:54.485376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.428 [2024-11-04 14:40:54.485404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.428 [2024-11-04 14:40:54.493401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.428 [2024-11-04 14:40:54.493428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.428 [2024-11-04 14:40:54.505369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.428 [2024-11-04 14:40:54.505400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.428 [2024-11-04 14:40:54.514636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.428 [2024-11-04 14:40:54.514663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.428 [2024-11-04 14:40:54.526635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.428 [2024-11-04 14:40:54.526663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.428 [2024-11-04 14:40:54.535375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.428 [2024-11-04 14:40:54.535403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.428 [2024-11-04 14:40:54.548248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.428 [2024-11-04 14:40:54.548281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.428 [2024-11-04 14:40:54.559908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.428 [2024-11-04 14:40:54.559940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.688 [2024-11-04 14:40:54.568637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.688 [2024-11-04 14:40:54.568669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.688 [2024-11-04 14:40:54.579488] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.688 [2024-11-04 14:40:54.579519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.688 [2024-11-04 14:40:54.588915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.688 [2024-11-04 14:40:54.588946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.688 [2024-11-04 14:40:54.598366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.688 [2024-11-04 14:40:54.598393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.688 [2024-11-04 14:40:54.607830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.688 [2024-11-04 14:40:54.607855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.688 [2024-11-04 14:40:54.617478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.688 [2024-11-04 14:40:54.617505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.688 [2024-11-04 14:40:54.627245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.688 [2024-11-04 14:40:54.627273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.688 [2024-11-04 14:40:54.641005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.688 [2024-11-04 14:40:54.641032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.688 [2024-11-04 14:40:54.656428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.688 [2024-11-04 14:40:54.656462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.688 [2024-11-04 14:40:54.667341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.688 [2024-11-04 14:40:54.667370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.688 [2024-11-04 14:40:54.675733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.688 [2024-11-04 14:40:54.675771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.688 [2024-11-04 14:40:54.687440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.688 [2024-11-04 14:40:54.687467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.688 [2024-11-04 14:40:54.698549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.688 [2024-11-04 14:40:54.698576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.688 [2024-11-04 14:40:54.715514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.688 [2024-11-04 14:40:54.715545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.688 [2024-11-04 14:40:54.731164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.688 [2024-11-04 14:40:54.731196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.688 [2024-11-04 14:40:54.742338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.688 [2024-11-04 14:40:54.742366] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.688 [2024-11-04 14:40:54.750239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.688 [2024-11-04 14:40:54.750266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.688 [2024-11-04 14:40:54.766365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.688 [2024-11-04 14:40:54.766394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.688 [2024-11-04 14:40:54.774791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.688 [2024-11-04 14:40:54.774819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.688 [2024-11-04 14:40:54.786558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.688 [2024-11-04 14:40:54.786588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.688 [2024-11-04 14:40:54.804431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.688 [2024-11-04 14:40:54.804460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.688 [2024-11-04 14:40:54.815334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.688 [2024-11-04 14:40:54.815360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.949 [2024-11-04 14:40:54.831396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.949 [2024-11-04 14:40:54.831424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.949 [2024-11-04 14:40:54.846747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.949 [2024-11-04 14:40:54.846776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.949 [2024-11-04 14:40:54.857759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.949 [2024-11-04 14:40:54.857785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.949 [2024-11-04 14:40:54.865626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.949 [2024-11-04 14:40:54.865651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.949 [2024-11-04 14:40:54.877457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.949 [2024-11-04 14:40:54.877483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.949 [2024-11-04 14:40:54.886539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.949 [2024-11-04 14:40:54.886568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.949 [2024-11-04 14:40:54.903326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.949 [2024-11-04 14:40:54.903355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.949 [2024-11-04 14:40:54.914442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.949 [2024-11-04 14:40:54.914469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.949 [2024-11-04 14:40:54.922440] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.949 [2024-11-04 14:40:54.922465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.949 [2024-11-04 14:40:54.934513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.949 [2024-11-04 14:40:54.934539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.949 [2024-11-04 14:40:54.952197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.949 [2024-11-04 14:40:54.952225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.949 13433.20 IOPS, 104.95 MiB/s [2024-11-04T14:40:55.089Z] [2024-11-04 14:40:54.960464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.949 [2024-11-04 14:40:54.960489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.949 00:14:45.949 Latency(us) 00:14:45.949 [2024-11-04T14:40:55.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.949 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:14:45.949 Nvme1n1 : 5.01 13435.48 104.96 0.00 0.00 9517.34 3453.24 18249.26 00:14:45.949 [2024-11-04T14:40:55.089Z] =================================================================================================================== 00:14:45.949 [2024-11-04T14:40:55.089Z] Total : 13435.48 104.96 0.00 0.00 9517.34 3453.24 18249.26 00:14:45.949 [2024-11-04 14:40:54.968039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.949 [2024-11-04 14:40:54.968064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.949 [2024-11-04 14:40:54.976036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.949 [2024-11-04 14:40:54.976060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.949 [2024-11-04 14:40:54.984037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.949 [2024-11-04 14:40:54.984056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.949 [2024-11-04 14:40:54.992039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.949 [2024-11-04 14:40:54.992059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.949 [2024-11-04 14:40:55.000042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.949 [2024-11-04 14:40:55.000062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.949 [2024-11-04 14:40:55.008042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.949 [2024-11-04 14:40:55.008064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.949 [2024-11-04 14:40:55.016043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.949 [2024-11-04 14:40:55.016063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.949 [2024-11-04 14:40:55.024045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.949 [2024-11-04 14:40:55.024065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.949 [2024-11-04 
14:40:55.040050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.949 [2024-11-04 14:40:55.040071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.949 [2024-11-04 14:40:55.048071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.949 [2024-11-04 14:40:55.048100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.949 [2024-11-04 14:40:55.056057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.949 [2024-11-04 14:40:55.056077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.949 [2024-11-04 14:40:55.064058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.949 [2024-11-04 14:40:55.064078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.949 [2024-11-04 14:40:55.072058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.949 [2024-11-04 14:40:55.072076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.950 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (64468) - No such process 00:14:45.950 14:40:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 64468 00:14:45.950 14:40:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:45.950 14:40:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.950 14:40:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:46.209 14:40:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.209 14:40:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:46.209 14:40:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.209 14:40:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:46.209 delay0 00:14:46.209 14:40:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.209 14:40:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:14:46.209 14:40:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.209 14:40:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:46.209 14:40:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.209 14:40:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:14:46.209 [2024-11-04 14:40:55.273084] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:52.790 Initializing NVMe Controllers 00:14:52.790 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:52.790 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:52.790 
Initialization complete. Launching workers. 00:14:52.790 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 156 00:14:52.790 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 443, failed to submit 33 00:14:52.790 success 344, unsuccessful 99, failed 0 00:14:52.790 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:14:52.790 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:14:52.790 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:52.790 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:14:52.790 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:52.790 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:14:52.790 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:52.790 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:52.790 rmmod nvme_tcp 00:14:52.790 rmmod nvme_fabrics 00:14:52.790 rmmod nvme_keyring 00:14:52.790 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:52.790 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:14:52.790 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:14:52.790 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 64313 ']' 00:14:52.790 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 64313 00:14:52.790 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 64313 ']' 00:14:52.790 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 64313 00:14:52.790 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:14:52.790 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:52.790 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64313 00:14:52.790 killing process with pid 64313 00:14:52.790 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:52.790 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:52.790 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64313' 00:14:52.790 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 64313 00:14:52.790 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 64313 00:14:52.790 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:52.790 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:14:52.791 00:14:52.791 real 0m24.284s 00:14:52.791 user 0m41.226s 00:14:52.791 sys 0m5.169s 00:14:52.791 ************************************ 00:14:52.791 END TEST nvmf_zcopy 00:14:52.791 ************************************ 00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:52.791 ************************************ 00:14:52.791 START TEST nvmf_nmic 00:14:52.791 ************************************ 
00:14:52.791 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:53.054 * Looking for test storage... 00:14:53.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:53.054 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:53.054 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:53.054 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:14:53.054 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:53.054 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:53.054 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:53.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.054 --rc genhtml_branch_coverage=1 00:14:53.054 --rc genhtml_function_coverage=1 00:14:53.054 --rc genhtml_legend=1 00:14:53.054 --rc geninfo_all_blocks=1 00:14:53.054 --rc geninfo_unexecuted_blocks=1 00:14:53.054 00:14:53.054 ' 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:53.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.054 --rc genhtml_branch_coverage=1 00:14:53.054 --rc genhtml_function_coverage=1 00:14:53.054 --rc genhtml_legend=1 00:14:53.054 --rc geninfo_all_blocks=1 00:14:53.054 --rc geninfo_unexecuted_blocks=1 00:14:53.054 00:14:53.054 ' 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:53.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.054 --rc genhtml_branch_coverage=1 00:14:53.054 --rc genhtml_function_coverage=1 00:14:53.054 --rc genhtml_legend=1 00:14:53.054 --rc geninfo_all_blocks=1 00:14:53.054 --rc geninfo_unexecuted_blocks=1 00:14:53.054 00:14:53.054 ' 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:53.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.054 --rc genhtml_branch_coverage=1 00:14:53.054 --rc genhtml_function_coverage=1 00:14:53.054 --rc genhtml_legend=1 00:14:53.054 --rc geninfo_all_blocks=1 00:14:53.054 --rc geninfo_unexecuted_blocks=1 00:14:53.054 00:14:53.054 ' 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.054 14:41:02 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.054 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:53.055 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:14:53.055 14:41:02 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:53.055 Cannot 
find device "nvmf_init_br" 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:53.055 Cannot find device "nvmf_init_br2" 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:53.055 Cannot find device "nvmf_tgt_br" 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:53.055 Cannot find device "nvmf_tgt_br2" 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:53.055 Cannot find device "nvmf_init_br" 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:53.055 Cannot find device "nvmf_init_br2" 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:53.055 Cannot find device "nvmf_tgt_br" 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:53.055 Cannot find device "nvmf_tgt_br2" 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:53.055 Cannot find device "nvmf_br" 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:53.055 Cannot find device "nvmf_init_if" 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:53.055 Cannot find device "nvmf_init_if2" 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:53.055 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:53.055 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:53.055 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:53.317 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:53.317 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:14:53.317 00:14:53.317 --- 10.0.0.3 ping statistics --- 00:14:53.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.317 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:53.317 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:53.317 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:14:53.317 00:14:53.317 --- 10.0.0.4 ping statistics --- 00:14:53.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.317 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:53.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:53.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:14:53.317 00:14:53.317 --- 10.0.0.1 ping statistics --- 00:14:53.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.317 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:53.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:53.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:14:53.317 00:14:53.317 --- 10.0.0.2 ping statistics --- 00:14:53.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.317 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=64836 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 64836 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 64836 ']' 00:14:53.317 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:53.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.318 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.318 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:53.318 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.318 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:53.318 14:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:53.318 [2024-11-04 14:41:02.430459] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:14:53.318 [2024-11-04 14:41:02.430517] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.579 [2024-11-04 14:41:02.573714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:53.579 [2024-11-04 14:41:02.620685] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.579 [2024-11-04 14:41:02.620739] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.579 [2024-11-04 14:41:02.620746] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.579 [2024-11-04 14:41:02.620752] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.579 [2024-11-04 14:41:02.620757] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:53.579 [2024-11-04 14:41:02.621787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.579 [2024-11-04 14:41:02.622194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.579 [2024-11-04 14:41:02.622859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:53.579 [2024-11-04 14:41:02.622979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.579 [2024-11-04 14:41:02.667155] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:54.530 [2024-11-04 14:41:03.366233] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:54.530 Malloc0 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:54.530 14:41:03 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:54.530 [2024-11-04 14:41:03.424595] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:54.530 test case1: single bdev can't be used in multiple subsystems 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:54.530 [2024-11-04 14:41:03.448470] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:14:54.530 [2024-11-04 14:41:03.448820] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:14:54.530 [2024-11-04 14:41:03.448900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.530 request: 00:14:54.530 { 00:14:54.530 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:54.530 "namespace": { 00:14:54.530 "bdev_name": "Malloc0", 00:14:54.530 "no_auto_visible": false 00:14:54.530 }, 00:14:54.530 "method": "nvmf_subsystem_add_ns", 00:14:54.530 "req_id": 1 00:14:54.530 } 00:14:54.530 Got JSON-RPC error response 00:14:54.530 response: 00:14:54.530 { 00:14:54.530 "code": -32602, 00:14:54.530 "message": "Invalid parameters" 00:14:54.530 } 00:14:54.530 Adding namespace failed - expected result. 00:14:54.530 test case2: host connect to nvmf target in multiple paths 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.530 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:54.531 [2024-11-04 14:41:03.460588] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:14:54.531 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.531 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid=0c7d476c-d4d7-4594-a48a-578d93697ffa -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:14:54.531 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid=0c7d476c-d4d7-4594-a48a-578d93697ffa -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:14:54.791 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:14:54.791 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:14:54.791 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:54.791 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:54.791 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:14:56.705 14:41:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:56.705 14:41:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:56.705 14:41:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:56.705 14:41:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:56.705 14:41:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:56.705 14:41:05 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:14:56.705 14:41:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:56.705 [global] 00:14:56.705 thread=1 00:14:56.705 invalidate=1 00:14:56.705 rw=write 00:14:56.705 time_based=1 00:14:56.705 runtime=1 00:14:56.705 ioengine=libaio 00:14:56.705 direct=1 00:14:56.705 bs=4096 00:14:56.705 iodepth=1 00:14:56.705 norandommap=0 00:14:56.705 numjobs=1 00:14:56.705 00:14:56.705 verify_dump=1 00:14:56.705 verify_backlog=512 00:14:56.705 verify_state_save=0 00:14:56.705 do_verify=1 00:14:56.705 verify=crc32c-intel 00:14:56.705 [job0] 00:14:56.705 filename=/dev/nvme0n1 00:14:56.705 Could not set queue depth (nvme0n1) 00:14:56.967 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:56.967 fio-3.35 00:14:56.967 Starting 1 thread 00:14:57.910 00:14:57.910 job0: (groupid=0, jobs=1): err= 0: pid=64922: Mon Nov 4 14:41:07 2024 00:14:57.910 read: IOPS=3727, BW=14.6MiB/s (15.3MB/s)(14.6MiB/1000msec) 00:14:57.910 slat (nsec): min=5345, max=53274, avg=6501.23, stdev=3237.91 00:14:57.910 clat (usec): min=99, max=392, avg=146.78, stdev=27.94 00:14:57.910 lat (usec): min=104, max=429, avg=153.28, stdev=30.01 00:14:57.910 clat percentiles (usec): 00:14:57.910 | 1.00th=[ 109], 5.00th=[ 117], 10.00th=[ 121], 20.00th=[ 128], 00:14:57.910 | 30.00th=[ 133], 40.00th=[ 139], 50.00th=[ 145], 60.00th=[ 149], 00:14:57.910 | 70.00th=[ 155], 80.00th=[ 163], 90.00th=[ 172], 95.00th=[ 178], 00:14:57.910 | 99.00th=[ 281], 99.50th=[ 326], 99.90th=[ 375], 99.95th=[ 383], 00:14:57.910 | 99.99th=[ 392] 00:14:57.910 write: IOPS=4096, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1000msec); 0 zone resets 00:14:57.910 slat (nsec): min=8752, max=83989, avg=10229.51, stdev=3940.30 00:14:57.910 clat (usec): min=62, max=372, avg=92.73, stdev=28.56 00:14:57.910 lat (usec): min=72, max=405, avg=102.96, stdev=31.21 00:14:57.910 clat percentiles (usec): 00:14:57.910 | 1.00th=[ 68], 5.00th=[ 71], 10.00th=[ 73], 20.00th=[ 78], 00:14:57.910 | 30.00th=[ 83], 40.00th=[ 87], 50.00th=[ 90], 60.00th=[ 94], 00:14:57.910 | 70.00th=[ 97], 80.00th=[ 101], 90.00th=[ 106], 95.00th=[ 112], 00:14:57.910 | 99.00th=[ 269], 99.50th=[ 322], 99.90th=[ 355], 99.95th=[ 363], 00:14:57.910 | 99.99th=[ 371] 00:14:57.910 bw ( KiB/s): min=16384, max=16384, per=100.00%, avg=16384.00, stdev= 0.00, samples=1 00:14:57.910 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:14:57.910 lat (usec) : 100=41.05%, 250=57.64%, 500=1.32% 00:14:57.910 cpu : usr=1.10%, sys=5.60%, ctx=7823, majf=0, minf=5 00:14:57.910 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:57.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:57.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:57.910 issued rwts: total=3727,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:57.910 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:57.910 00:14:57.910 Run status group 0 (all jobs): 00:14:57.910 READ: bw=14.6MiB/s (15.3MB/s), 14.6MiB/s-14.6MiB/s (15.3MB/s-15.3MB/s), io=14.6MiB (15.3MB), run=1000-1000msec 00:14:57.910 WRITE: bw=16.0MiB/s (16.8MB/s), 16.0MiB/s-16.0MiB/s (16.8MB/s-16.8MB/s), io=16.0MiB (16.8MB), run=1000-1000msec 00:14:57.910 00:14:57.910 Disk stats (read/write): 00:14:57.910 nvme0n1: ios=3430/3584, merge=0/0, ticks=511/345, in_queue=856, 
util=90.86% 00:14:57.910 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:58.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:58.171 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:58.171 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:14:58.171 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:58.171 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:58.171 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:58.171 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:58.171 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:14:58.172 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:58.172 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:14:58.172 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:58.172 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:14:58.172 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:58.172 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:14:58.172 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:58.172 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:58.172 rmmod nvme_tcp 00:14:58.172 rmmod nvme_fabrics 00:14:58.172 rmmod nvme_keyring 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 64836 ']' 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 64836 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 64836 ']' 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 64836 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64836 00:14:58.433 killing process with pid 64836 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64836' 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 64836 
00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 64836 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:58.433 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:58.694 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:58.694 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:58.694 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:58.694 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:58.694 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:58.694 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.694 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:58.694 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.694 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:14:58.694 00:14:58.694 real 0m5.848s 00:14:58.694 user 0m18.753s 00:14:58.694 sys 0m1.694s 00:14:58.694 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:58.694 ************************************ 00:14:58.694 END TEST nvmf_nmic 00:14:58.694 ************************************ 00:14:58.694 14:41:07 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:58.694 14:41:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:58.694 14:41:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:58.694 14:41:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:58.694 14:41:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:58.694 ************************************ 00:14:58.694 START TEST nvmf_fio_target 00:14:58.694 ************************************ 00:14:58.694 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:58.956 * Looking for test storage... 00:14:58.956 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:58.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.956 --rc genhtml_branch_coverage=1 00:14:58.956 --rc genhtml_function_coverage=1 00:14:58.956 --rc genhtml_legend=1 00:14:58.956 --rc geninfo_all_blocks=1 00:14:58.956 --rc geninfo_unexecuted_blocks=1 00:14:58.956 00:14:58.956 ' 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:58.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.956 --rc genhtml_branch_coverage=1 00:14:58.956 --rc genhtml_function_coverage=1 00:14:58.956 --rc genhtml_legend=1 00:14:58.956 --rc geninfo_all_blocks=1 00:14:58.956 --rc geninfo_unexecuted_blocks=1 00:14:58.956 00:14:58.956 ' 00:14:58.956 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:58.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.956 --rc genhtml_branch_coverage=1 00:14:58.956 --rc genhtml_function_coverage=1 00:14:58.956 --rc genhtml_legend=1 00:14:58.956 --rc geninfo_all_blocks=1 00:14:58.957 --rc geninfo_unexecuted_blocks=1 00:14:58.957 00:14:58.957 ' 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:58.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.957 --rc genhtml_branch_coverage=1 00:14:58.957 --rc genhtml_function_coverage=1 00:14:58.957 --rc genhtml_legend=1 00:14:58.957 --rc geninfo_all_blocks=1 00:14:58.957 --rc geninfo_unexecuted_blocks=1 00:14:58.957 00:14:58.957 ' 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:14:58.957 
14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:58.957 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:58.957 14:41:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:58.957 Cannot find device "nvmf_init_br" 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:58.957 Cannot find device "nvmf_init_br2" 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:58.957 Cannot find device "nvmf_tgt_br" 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:58.957 Cannot find device "nvmf_tgt_br2" 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:14:58.957 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:58.957 Cannot find device "nvmf_init_br" 00:14:58.957 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:14:58.957 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:58.957 Cannot find device "nvmf_init_br2" 00:14:58.957 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:14:58.957 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:58.958 Cannot find device "nvmf_tgt_br" 00:14:58.958 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:14:58.958 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:58.958 Cannot find device "nvmf_tgt_br2" 00:14:58.958 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:14:58.958 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:58.958 Cannot find device "nvmf_br" 00:14:58.958 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:14:58.958 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:58.958 Cannot find device "nvmf_init_if" 00:14:58.958 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:14:58.958 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:58.958 Cannot find device "nvmf_init_if2" 00:14:58.958 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:14:58.958 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:58.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:58.958 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:14:58.958 
14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:58.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:58.958 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:14:58.958 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:58.958 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:58.958 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:59.217 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:59.217 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:59.217 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:59.217 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:59.217 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:59.217 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:59.217 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:59.217 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:59.217 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:59.217 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:59.217 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:59.217 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:59.217 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:59.217 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:59.217 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:59.217 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:59.217 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:59.217 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:59.217 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:59.217 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:59.217 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:59.218 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:59.218 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:14:59.218 00:14:59.218 --- 10.0.0.3 ping statistics --- 00:14:59.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.218 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:59.218 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:59.218 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:14:59.218 00:14:59.218 --- 10.0.0.4 ping statistics --- 00:14:59.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.218 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:59.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:59.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:14:59.218 00:14:59.218 --- 10.0.0.1 ping statistics --- 00:14:59.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.218 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:59.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:59.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:14:59.218 00:14:59.218 --- 10.0.0.2 ping statistics --- 00:14:59.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.218 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=65163 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 65163 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 65163 ']' 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.218 14:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:59.218 [2024-11-04 14:41:08.331458] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:14:59.218 [2024-11-04 14:41:08.331517] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.478 [2024-11-04 14:41:08.471401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:59.478 [2024-11-04 14:41:08.508087] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.478 [2024-11-04 14:41:08.508144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.478 [2024-11-04 14:41:08.508152] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.478 [2024-11-04 14:41:08.508158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:59.478 [2024-11-04 14:41:08.508164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:59.478 [2024-11-04 14:41:08.508879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.478 [2024-11-04 14:41:08.509335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:59.478 [2024-11-04 14:41:08.509662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:59.478 [2024-11-04 14:41:08.509700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.478 [2024-11-04 14:41:08.541913] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:00.421 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:00.421 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:15:00.421 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:00.421 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:00.421 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.421 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.421 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:00.421 [2024-11-04 14:41:09.427433] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:00.421 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:00.682 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:00.682 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:00.941 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:00.941 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:01.200 14:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:01.200 14:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:01.200 14:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:01.200 14:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:01.461 14:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:01.725 14:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:01.725 14:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:01.987 14:41:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:01.987 14:41:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:02.247 14:41:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:02.247 14:41:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:02.506 14:41:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:02.766 14:41:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:02.766 14:41:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:03.024 14:41:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:03.024 14:41:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:03.024 14:41:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:03.284 [2024-11-04 14:41:12.307182] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:03.284 14:41:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:03.544 14:41:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:03.804 14:41:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid=0c7d476c-d4d7-4594-a48a-578d93697ffa -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:15:03.804 14:41:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:03.804 14:41:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:15:03.804 14:41:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:03.804 14:41:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:15:03.804 14:41:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:15:03.804 14:41:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:15:06.356 14:41:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:06.356 14:41:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:06.356 14:41:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:06.356 14:41:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:15:06.356 14:41:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:06.356 14:41:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:15:06.356 14:41:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:06.356 [global] 00:15:06.356 thread=1 00:15:06.356 invalidate=1 00:15:06.356 rw=write 00:15:06.356 time_based=1 00:15:06.356 runtime=1 00:15:06.356 ioengine=libaio 00:15:06.356 direct=1 00:15:06.356 bs=4096 00:15:06.356 iodepth=1 00:15:06.356 norandommap=0 00:15:06.356 numjobs=1 00:15:06.356 00:15:06.356 verify_dump=1 00:15:06.356 verify_backlog=512 00:15:06.356 verify_state_save=0 00:15:06.356 do_verify=1 00:15:06.356 verify=crc32c-intel 00:15:06.356 [job0] 00:15:06.356 filename=/dev/nvme0n1 00:15:06.356 [job1] 00:15:06.356 filename=/dev/nvme0n2 00:15:06.356 [job2] 00:15:06.356 filename=/dev/nvme0n3 00:15:06.356 [job3] 00:15:06.356 filename=/dev/nvme0n4 00:15:06.357 Could not set queue depth (nvme0n1) 00:15:06.357 Could not set queue depth (nvme0n2) 00:15:06.357 Could not set queue depth (nvme0n3) 00:15:06.357 Could not set queue depth (nvme0n4) 00:15:06.357 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:06.357 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:06.357 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:06.357 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:06.357 fio-3.35 00:15:06.357 Starting 4 threads 00:15:07.304 00:15:07.304 job0: (groupid=0, jobs=1): err= 0: pid=65337: Mon Nov 4 14:41:16 2024 00:15:07.304 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:15:07.304 slat (usec): min=5, max=109, avg=14.28, stdev=11.25 00:15:07.304 clat (usec): min=98, max=603, avg=222.80, stdev=80.87 00:15:07.304 lat (usec): min=103, max=627, avg=237.08, stdev=89.06 00:15:07.304 clat percentiles (usec): 00:15:07.304 | 1.00th=[ 105], 5.00th=[ 118], 10.00th=[ 129], 20.00th=[ 157], 00:15:07.304 | 30.00th=[ 180], 40.00th=[ 188], 50.00th=[ 198], 60.00th=[ 215], 00:15:07.304 | 70.00th=[ 258], 80.00th=[ 297], 90.00th=[ 343], 95.00th=[ 371], 00:15:07.304 | 99.00th=[ 445], 99.50th=[ 461], 99.90th=[ 537], 99.95th=[ 570], 00:15:07.304 | 99.99th=[ 603] 00:15:07.304 
write: IOPS=2401, BW=9606KiB/s (9837kB/s)(9616KiB/1001msec); 0 zone resets 00:15:07.304 slat (usec): min=7, max=274, avg=25.55, stdev=17.06 00:15:07.304 clat (usec): min=64, max=690, avg=184.57, stdev=112.66 00:15:07.304 lat (usec): min=73, max=707, avg=210.12, stdev=126.20 00:15:07.304 clat percentiles (usec): 00:15:07.304 | 1.00th=[ 71], 5.00th=[ 76], 10.00th=[ 81], 20.00th=[ 87], 00:15:07.304 | 30.00th=[ 94], 40.00th=[ 111], 50.00th=[ 133], 60.00th=[ 147], 00:15:07.304 | 70.00th=[ 289], 80.00th=[ 322], 90.00th=[ 355], 95.00th=[ 375], 00:15:07.304 | 99.00th=[ 424], 99.50th=[ 445], 99.90th=[ 510], 99.95th=[ 519], 00:15:07.304 | 99.99th=[ 693] 00:15:07.304 bw ( KiB/s): min=11592, max=11592, per=28.90%, avg=11592.00, stdev= 0.00, samples=1 00:15:07.304 iops : min= 2898, max= 2898, avg=2898.00, stdev= 0.00, samples=1 00:15:07.304 lat (usec) : 100=19.23%, 250=47.66%, 500=32.88%, 750=0.22% 00:15:07.304 cpu : usr=1.90%, sys=7.20%, ctx=4454, majf=0, minf=13 00:15:07.304 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:07.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.304 issued rwts: total=2048,2404,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.304 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:07.304 job1: (groupid=0, jobs=1): err= 0: pid=65338: Mon Nov 4 14:41:16 2024 00:15:07.304 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:15:07.304 slat (nsec): min=5174, max=81294, avg=11579.87, stdev=11812.86 00:15:07.304 clat (usec): min=90, max=660, avg=189.49, stdev=87.67 00:15:07.304 lat (usec): min=95, max=692, avg=201.07, stdev=97.82 00:15:07.304 clat percentiles (usec): 00:15:07.304 | 1.00th=[ 99], 5.00th=[ 106], 10.00th=[ 113], 20.00th=[ 121], 00:15:07.304 | 30.00th=[ 130], 40.00th=[ 143], 50.00th=[ 167], 60.00th=[ 180], 00:15:07.304 | 70.00th=[ 192], 80.00th=[ 262], 90.00th=[ 330], 95.00th=[ 367], 00:15:07.304 | 99.00th=[ 469], 99.50th=[ 529], 99.90th=[ 603], 99.95th=[ 644], 00:15:07.304 | 99.99th=[ 660] 00:15:07.304 write: IOPS=2993, BW=11.7MiB/s (12.3MB/s)(11.7MiB/1001msec); 0 zone resets 00:15:07.304 slat (usec): min=7, max=152, avg=16.13, stdev=14.76 00:15:07.304 clat (usec): min=60, max=854, avg=142.91, stdev=93.58 00:15:07.304 lat (usec): min=69, max=960, avg=159.04, stdev=105.85 00:15:07.304 clat percentiles (usec): 00:15:07.304 | 1.00th=[ 67], 5.00th=[ 72], 10.00th=[ 76], 20.00th=[ 81], 00:15:07.304 | 30.00th=[ 86], 40.00th=[ 91], 50.00th=[ 99], 60.00th=[ 127], 00:15:07.304 | 70.00th=[ 139], 80.00th=[ 161], 90.00th=[ 318], 95.00th=[ 355], 00:15:07.304 | 99.00th=[ 424], 99.50th=[ 449], 99.90th=[ 519], 99.95th=[ 603], 00:15:07.304 | 99.99th=[ 857] 00:15:07.304 bw ( KiB/s): min= 8192, max= 8192, per=20.42%, avg=8192.00, stdev= 0.00, samples=1 00:15:07.304 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:07.304 lat (usec) : 100=28.17%, 250=53.33%, 500=18.14%, 750=0.34%, 1000=0.02% 00:15:07.304 cpu : usr=1.70%, sys=6.30%, ctx=5568, majf=0, minf=15 00:15:07.304 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:07.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.304 issued rwts: total=2560,2996,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.304 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:07.304 job2: (groupid=0, jobs=1): err= 
0: pid=65339: Mon Nov 4 14:41:16 2024 00:15:07.304 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:15:07.304 slat (usec): min=5, max=136, avg=16.69, stdev=15.56 00:15:07.304 clat (usec): min=101, max=646, avg=228.58, stdev=99.27 00:15:07.304 lat (usec): min=106, max=670, avg=245.28, stdev=112.27 00:15:07.304 clat percentiles (usec): 00:15:07.304 | 1.00th=[ 110], 5.00th=[ 117], 10.00th=[ 123], 20.00th=[ 133], 00:15:07.304 | 30.00th=[ 151], 40.00th=[ 180], 50.00th=[ 192], 60.00th=[ 231], 00:15:07.304 | 70.00th=[ 306], 80.00th=[ 334], 90.00th=[ 371], 95.00th=[ 396], 00:15:07.304 | 99.00th=[ 474], 99.50th=[ 537], 99.90th=[ 603], 99.95th=[ 611], 00:15:07.304 | 99.99th=[ 644] 00:15:07.304 write: IOPS=2331, BW=9327KiB/s (9551kB/s)(9336KiB/1001msec); 0 zone resets 00:15:07.304 slat (usec): min=8, max=118, avg=23.96, stdev=18.15 00:15:07.304 clat (usec): min=73, max=2476, avg=185.01, stdev=121.76 00:15:07.304 lat (usec): min=83, max=2538, avg=208.97, stdev=136.66 00:15:07.304 clat percentiles (usec): 00:15:07.304 | 1.00th=[ 77], 5.00th=[ 82], 10.00th=[ 86], 20.00th=[ 91], 00:15:07.304 | 30.00th=[ 97], 40.00th=[ 109], 50.00th=[ 137], 60.00th=[ 149], 00:15:07.304 | 70.00th=[ 269], 80.00th=[ 314], 90.00th=[ 355], 95.00th=[ 388], 00:15:07.304 | 99.00th=[ 445], 99.50th=[ 474], 99.90th=[ 635], 99.95th=[ 1106], 00:15:07.304 | 99.99th=[ 2474] 00:15:07.304 bw ( KiB/s): min=12400, max=12400, per=30.91%, avg=12400.00, stdev= 0.00, samples=1 00:15:07.304 iops : min= 3100, max= 3100, avg=3100.00, stdev= 0.00, samples=1 00:15:07.304 lat (usec) : 100=18.05%, 250=48.22%, 500=33.16%, 750=0.52% 00:15:07.304 lat (msec) : 2=0.02%, 4=0.02% 00:15:07.304 cpu : usr=2.10%, sys=7.00%, ctx=4384, majf=0, minf=9 00:15:07.304 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:07.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.304 issued rwts: total=2048,2334,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.304 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:07.304 job3: (groupid=0, jobs=1): err= 0: pid=65340: Mon Nov 4 14:41:16 2024 00:15:07.304 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:15:07.305 slat (usec): min=5, max=230, avg=13.53, stdev=12.72 00:15:07.305 clat (usec): min=103, max=574, avg=212.86, stdev=76.37 00:15:07.305 lat (usec): min=109, max=602, avg=226.40, stdev=85.46 00:15:07.305 clat percentiles (usec): 00:15:07.305 | 1.00th=[ 112], 5.00th=[ 118], 10.00th=[ 125], 20.00th=[ 139], 00:15:07.305 | 30.00th=[ 178], 40.00th=[ 186], 50.00th=[ 194], 60.00th=[ 204], 00:15:07.305 | 70.00th=[ 229], 80.00th=[ 285], 90.00th=[ 330], 95.00th=[ 359], 00:15:07.305 | 99.00th=[ 416], 99.50th=[ 453], 99.90th=[ 506], 99.95th=[ 562], 00:15:07.305 | 99.99th=[ 578] 00:15:07.305 write: IOPS=2301, BW=9207KiB/s (9428kB/s)(9216KiB/1001msec); 0 zone resets 00:15:07.305 slat (nsec): min=8983, max=90843, avg=24650.03, stdev=16258.98 00:15:07.305 clat (usec): min=75, max=716, avg=204.33, stdev=112.22 00:15:07.305 lat (usec): min=84, max=761, avg=228.98, stdev=126.19 00:15:07.305 clat percentiles (usec): 00:15:07.305 | 1.00th=[ 81], 5.00th=[ 85], 10.00th=[ 90], 20.00th=[ 97], 00:15:07.305 | 30.00th=[ 110], 40.00th=[ 139], 50.00th=[ 149], 60.00th=[ 192], 00:15:07.305 | 70.00th=[ 306], 80.00th=[ 326], 90.00th=[ 359], 95.00th=[ 383], 00:15:07.305 | 99.00th=[ 437], 99.50th=[ 453], 99.90th=[ 498], 99.95th=[ 685], 00:15:07.305 | 99.99th=[ 717] 00:15:07.305 bw 
( KiB/s): min= 9200, max= 9200, per=22.94%, avg=9200.00, stdev= 0.00, samples=1 00:15:07.305 iops : min= 2300, max= 2300, avg=2300.00, stdev= 0.00, samples=1 00:15:07.305 lat (usec) : 100=12.78%, 250=54.37%, 500=32.72%, 750=0.14% 00:15:07.305 cpu : usr=1.80%, sys=6.70%, ctx=4352, majf=0, minf=11 00:15:07.305 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:07.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.305 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.305 issued rwts: total=2048,2304,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.305 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:07.305 00:15:07.305 Run status group 0 (all jobs): 00:15:07.305 READ: bw=34.0MiB/s (35.6MB/s), 8184KiB/s-9.99MiB/s (8380kB/s-10.5MB/s), io=34.0MiB (35.7MB), run=1001-1001msec 00:15:07.305 WRITE: bw=39.2MiB/s (41.1MB/s), 9207KiB/s-11.7MiB/s (9428kB/s-12.3MB/s), io=39.2MiB (41.1MB), run=1001-1001msec 00:15:07.305 00:15:07.305 Disk stats (read/write): 00:15:07.305 nvme0n1: ios=2098/2081, merge=0/0, ticks=482/352, in_queue=834, util=90.07% 00:15:07.305 nvme0n2: ios=2158/2560, merge=0/0, ticks=434/393, in_queue=827, util=89.62% 00:15:07.305 nvme0n3: ios=2064/2048, merge=0/0, ticks=516/351, in_queue=867, util=90.53% 00:15:07.305 nvme0n4: ios=1675/2048, merge=0/0, ticks=412/425, in_queue=837, util=90.60% 00:15:07.305 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:07.305 [global] 00:15:07.305 thread=1 00:15:07.305 invalidate=1 00:15:07.305 rw=randwrite 00:15:07.305 time_based=1 00:15:07.305 runtime=1 00:15:07.305 ioengine=libaio 00:15:07.305 direct=1 00:15:07.305 bs=4096 00:15:07.305 iodepth=1 00:15:07.305 norandommap=0 00:15:07.305 numjobs=1 00:15:07.305 00:15:07.305 verify_dump=1 00:15:07.305 verify_backlog=512 00:15:07.305 verify_state_save=0 00:15:07.305 do_verify=1 00:15:07.305 verify=crc32c-intel 00:15:07.305 [job0] 00:15:07.305 filename=/dev/nvme0n1 00:15:07.305 [job1] 00:15:07.305 filename=/dev/nvme0n2 00:15:07.305 [job2] 00:15:07.305 filename=/dev/nvme0n3 00:15:07.305 [job3] 00:15:07.305 filename=/dev/nvme0n4 00:15:07.305 Could not set queue depth (nvme0n1) 00:15:07.305 Could not set queue depth (nvme0n2) 00:15:07.305 Could not set queue depth (nvme0n3) 00:15:07.305 Could not set queue depth (nvme0n4) 00:15:07.305 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:07.305 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:07.305 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:07.305 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:07.305 fio-3.35 00:15:07.305 Starting 4 threads 00:15:08.684 00:15:08.684 job0: (groupid=0, jobs=1): err= 0: pid=65399: Mon Nov 4 14:41:17 2024 00:15:08.684 read: IOPS=4546, BW=17.8MiB/s (18.6MB/s)(17.8MiB/1001msec) 00:15:08.684 slat (nsec): min=5129, max=23195, avg=5686.95, stdev=880.02 00:15:08.684 clat (usec): min=91, max=1153, avg=116.37, stdev=20.75 00:15:08.684 lat (usec): min=96, max=1158, avg=122.06, stdev=20.78 00:15:08.684 clat percentiles (usec): 00:15:08.684 | 1.00th=[ 96], 5.00th=[ 100], 10.00th=[ 103], 20.00th=[ 106], 00:15:08.684 | 30.00th=[ 109], 40.00th=[ 112], 50.00th=[ 
114], 60.00th=[ 117], 00:15:08.684 | 70.00th=[ 120], 80.00th=[ 125], 90.00th=[ 133], 95.00th=[ 141], 00:15:08.684 | 99.00th=[ 163], 99.50th=[ 174], 99.90th=[ 237], 99.95th=[ 241], 00:15:08.684 | 99.99th=[ 1156] 00:15:08.684 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:15:08.684 slat (nsec): min=6800, max=72081, avg=9881.47, stdev=3056.37 00:15:08.684 clat (usec): min=63, max=330, avg=84.94, stdev=15.21 00:15:08.684 lat (usec): min=72, max=346, avg=94.82, stdev=15.73 00:15:08.684 clat percentiles (usec): 00:15:08.684 | 1.00th=[ 68], 5.00th=[ 71], 10.00th=[ 73], 20.00th=[ 75], 00:15:08.684 | 30.00th=[ 77], 40.00th=[ 79], 50.00th=[ 82], 60.00th=[ 84], 00:15:08.684 | 70.00th=[ 88], 80.00th=[ 93], 90.00th=[ 101], 95.00th=[ 113], 00:15:08.684 | 99.00th=[ 149], 99.50th=[ 161], 99.90th=[ 178], 99.95th=[ 182], 00:15:08.684 | 99.99th=[ 330] 00:15:08.684 bw ( KiB/s): min=20200, max=20200, per=40.90%, avg=20200.00, stdev= 0.00, samples=1 00:15:08.684 iops : min= 5050, max= 5050, avg=5050.00, stdev= 0.00, samples=1 00:15:08.684 lat (usec) : 100=47.28%, 250=52.69%, 500=0.02% 00:15:08.684 lat (msec) : 2=0.01% 00:15:08.684 cpu : usr=1.10%, sys=6.30%, ctx=9159, majf=0, minf=11 00:15:08.684 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:08.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.684 issued rwts: total=4551,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.684 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:08.684 job1: (groupid=0, jobs=1): err= 0: pid=65400: Mon Nov 4 14:41:17 2024 00:15:08.684 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:15:08.684 slat (nsec): min=5183, max=65007, avg=12010.74, stdev=10688.40 00:15:08.684 clat (usec): min=93, max=1321, avg=198.95, stdev=97.39 00:15:08.684 lat (usec): min=98, max=1329, avg=210.96, stdev=104.46 00:15:08.684 clat percentiles (usec): 00:15:08.684 | 1.00th=[ 100], 5.00th=[ 106], 10.00th=[ 111], 20.00th=[ 118], 00:15:08.684 | 30.00th=[ 129], 40.00th=[ 153], 50.00th=[ 184], 60.00th=[ 200], 00:15:08.684 | 70.00th=[ 219], 80.00th=[ 265], 90.00th=[ 318], 95.00th=[ 367], 00:15:08.684 | 99.00th=[ 529], 99.50th=[ 619], 99.90th=[ 963], 99.95th=[ 1057], 00:15:08.684 | 99.99th=[ 1319] 00:15:08.684 write: IOPS=2866, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1001msec); 0 zone resets 00:15:08.684 slat (usec): min=6, max=124, avg=16.69, stdev=14.88 00:15:08.684 clat (usec): min=60, max=489, avg=140.49, stdev=88.54 00:15:08.684 lat (usec): min=71, max=547, avg=157.18, stdev=99.77 00:15:08.684 clat percentiles (usec): 00:15:08.684 | 1.00th=[ 69], 5.00th=[ 73], 10.00th=[ 76], 20.00th=[ 81], 00:15:08.684 | 30.00th=[ 86], 40.00th=[ 91], 50.00th=[ 99], 60.00th=[ 119], 00:15:08.684 | 70.00th=[ 145], 80.00th=[ 165], 90.00th=[ 310], 95.00th=[ 347], 00:15:08.684 | 99.00th=[ 408], 99.50th=[ 424], 99.90th=[ 461], 99.95th=[ 469], 00:15:08.684 | 99.99th=[ 490] 00:15:08.684 bw ( KiB/s): min=15568, max=15568, per=31.52%, avg=15568.00, stdev= 0.00, samples=1 00:15:08.684 iops : min= 3892, max= 3892, avg=3892.00, stdev= 0.00, samples=1 00:15:08.684 lat (usec) : 100=27.54%, 250=53.64%, 500=18.25%, 750=0.42%, 1000=0.11% 00:15:08.684 lat (msec) : 2=0.04% 00:15:08.684 cpu : usr=1.70%, sys=6.40%, ctx=5441, majf=0, minf=15 00:15:08.684 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:08.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:15:08.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.684 issued rwts: total=2560,2869,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.684 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:08.684 job2: (groupid=0, jobs=1): err= 0: pid=65401: Mon Nov 4 14:41:17 2024 00:15:08.684 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:15:08.684 slat (nsec): min=5276, max=93916, avg=13327.05, stdev=8660.02 00:15:08.684 clat (usec): min=106, max=1315, avg=264.93, stdev=104.92 00:15:08.684 lat (usec): min=114, max=1324, avg=278.26, stdev=110.75 00:15:08.684 clat percentiles (usec): 00:15:08.684 | 1.00th=[ 117], 5.00th=[ 133], 10.00th=[ 151], 20.00th=[ 190], 00:15:08.684 | 30.00th=[ 202], 40.00th=[ 212], 50.00th=[ 231], 60.00th=[ 281], 00:15:08.684 | 70.00th=[ 318], 80.00th=[ 343], 90.00th=[ 388], 95.00th=[ 424], 00:15:08.684 | 99.00th=[ 594], 99.50th=[ 799], 99.90th=[ 1057], 99.95th=[ 1319], 00:15:08.684 | 99.99th=[ 1319] 00:15:08.684 write: IOPS=1908, BW=7632KiB/s (7816kB/s)(7640KiB/1001msec); 0 zone resets 00:15:08.684 slat (usec): min=8, max=117, avg=32.05, stdev=17.49 00:15:08.684 clat (usec): min=76, max=1721, avg=262.63, stdev=119.48 00:15:08.684 lat (usec): min=85, max=1746, avg=294.68, stdev=131.79 00:15:08.684 clat percentiles (usec): 00:15:08.684 | 1.00th=[ 84], 5.00th=[ 92], 10.00th=[ 100], 20.00th=[ 143], 00:15:08.684 | 30.00th=[ 161], 40.00th=[ 265], 50.00th=[ 297], 60.00th=[ 314], 00:15:08.684 | 70.00th=[ 334], 80.00th=[ 355], 90.00th=[ 388], 95.00th=[ 416], 00:15:08.684 | 99.00th=[ 478], 99.50th=[ 515], 99.90th=[ 1614], 99.95th=[ 1729], 00:15:08.684 | 99.99th=[ 1729] 00:15:08.684 bw ( KiB/s): min= 6960, max= 6960, per=14.09%, avg=6960.00, stdev= 0.00, samples=1 00:15:08.684 iops : min= 1740, max= 1740, avg=1740.00, stdev= 0.00, samples=1 00:15:08.684 lat (usec) : 100=5.57%, 250=39.76%, 500=53.45%, 750=0.87%, 1000=0.23% 00:15:08.684 lat (msec) : 2=0.12% 00:15:08.684 cpu : usr=1.90%, sys=7.10%, ctx=3447, majf=0, minf=15 00:15:08.684 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:08.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.684 issued rwts: total=1536,1910,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.684 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:08.684 job3: (groupid=0, jobs=1): err= 0: pid=65402: Mon Nov 4 14:41:17 2024 00:15:08.684 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:15:08.684 slat (usec): min=5, max=3658, avg=10.19, stdev=72.54 00:15:08.684 clat (usec): min=3, max=2864, avg=150.83, stdev=139.40 00:15:08.684 lat (usec): min=107, max=3661, avg=161.02, stdev=158.35 00:15:08.684 clat percentiles (usec): 00:15:08.684 | 1.00th=[ 106], 5.00th=[ 111], 10.00th=[ 114], 20.00th=[ 117], 00:15:08.684 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 127], 60.00th=[ 131], 00:15:08.684 | 70.00th=[ 137], 80.00th=[ 145], 90.00th=[ 229], 95.00th=[ 293], 00:15:08.684 | 99.00th=[ 359], 99.50th=[ 392], 99.90th=[ 2769], 99.95th=[ 2868], 00:15:08.684 | 99.99th=[ 2868] 00:15:08.684 write: IOPS=2971, BW=11.6MiB/s (12.2MB/s)(11.6MiB/1001msec); 0 zone resets 00:15:08.684 slat (usec): min=7, max=101, avg=15.83, stdev=12.41 00:15:08.684 clat (usec): min=74, max=1677, avg=179.45, stdev=125.30 00:15:08.684 lat (usec): min=83, max=1688, avg=195.28, stdev=132.36 00:15:08.684 clat percentiles (usec): 00:15:08.684 | 1.00th=[ 79], 5.00th=[ 83], 
10.00th=[ 86], 20.00th=[ 90], 00:15:08.684 | 30.00th=[ 94], 40.00th=[ 98], 50.00th=[ 102], 60.00th=[ 113], 00:15:08.684 | 70.00th=[ 269], 80.00th=[ 330], 90.00th=[ 375], 95.00th=[ 408], 00:15:08.684 | 99.00th=[ 457], 99.50th=[ 486], 99.90th=[ 570], 99.95th=[ 709], 00:15:08.684 | 99.99th=[ 1680] 00:15:08.684 bw ( KiB/s): min=12312, max=12312, per=24.93%, avg=12312.00, stdev= 0.00, samples=1 00:15:08.684 iops : min= 3078, max= 3078, avg=3078.00, stdev= 0.00, samples=1 00:15:08.684 lat (usec) : 4=0.02%, 100=24.45%, 250=54.48%, 500=20.74%, 750=0.18% 00:15:08.685 lat (msec) : 2=0.02%, 4=0.11% 00:15:08.685 cpu : usr=1.70%, sys=5.50%, ctx=5535, majf=0, minf=9 00:15:08.685 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:08.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.685 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.685 issued rwts: total=2560,2974,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.685 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:08.685 00:15:08.685 Run status group 0 (all jobs): 00:15:08.685 READ: bw=43.7MiB/s (45.9MB/s), 6138KiB/s-17.8MiB/s (6285kB/s-18.6MB/s), io=43.8MiB (45.9MB), run=1001-1001msec 00:15:08.685 WRITE: bw=48.2MiB/s (50.6MB/s), 7632KiB/s-18.0MiB/s (7816kB/s-18.9MB/s), io=48.3MiB (50.6MB), run=1001-1001msec 00:15:08.685 00:15:08.685 Disk stats (read/write): 00:15:08.685 nvme0n1: ios=4021/4096, merge=0/0, ticks=490/362, in_queue=852, util=89.78% 00:15:08.685 nvme0n2: ios=2488/2560, merge=0/0, ticks=493/348, in_queue=841, util=89.74% 00:15:08.685 nvme0n3: ios=1586/1581, merge=0/0, ticks=427/395, in_queue=822, util=91.38% 00:15:08.685 nvme0n4: ios=2581/2560, merge=0/0, ticks=413/403, in_queue=816, util=90.54% 00:15:08.685 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:08.685 [global] 00:15:08.685 thread=1 00:15:08.685 invalidate=1 00:15:08.685 rw=write 00:15:08.685 time_based=1 00:15:08.685 runtime=1 00:15:08.685 ioengine=libaio 00:15:08.685 direct=1 00:15:08.685 bs=4096 00:15:08.685 iodepth=128 00:15:08.685 norandommap=0 00:15:08.685 numjobs=1 00:15:08.685 00:15:08.685 verify_dump=1 00:15:08.685 verify_backlog=512 00:15:08.685 verify_state_save=0 00:15:08.685 do_verify=1 00:15:08.685 verify=crc32c-intel 00:15:08.685 [job0] 00:15:08.685 filename=/dev/nvme0n1 00:15:08.685 [job1] 00:15:08.685 filename=/dev/nvme0n2 00:15:08.685 [job2] 00:15:08.685 filename=/dev/nvme0n3 00:15:08.685 [job3] 00:15:08.685 filename=/dev/nvme0n4 00:15:08.685 Could not set queue depth (nvme0n1) 00:15:08.685 Could not set queue depth (nvme0n2) 00:15:08.685 Could not set queue depth (nvme0n3) 00:15:08.685 Could not set queue depth (nvme0n4) 00:15:08.685 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:08.685 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:08.685 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:08.685 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:08.685 fio-3.35 00:15:08.685 Starting 4 threads 00:15:10.107 00:15:10.107 job0: (groupid=0, jobs=1): err= 0: pid=65455: Mon Nov 4 14:41:18 2024 00:15:10.107 read: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec) 00:15:10.107 slat (usec): 
min=3, max=3442, avg=86.01, stdev=373.65 00:15:10.107 clat (usec): min=7839, max=16935, avg=11074.55, stdev=1122.02 00:15:10.107 lat (usec): min=7854, max=16943, avg=11160.57, stdev=1154.35 00:15:10.107 clat percentiles (usec): 00:15:10.107 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[10159], 00:15:10.107 | 30.00th=[10421], 40.00th=[10683], 50.00th=[11076], 60.00th=[11338], 00:15:10.107 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12518], 95.00th=[13042], 00:15:10.107 | 99.00th=[14222], 99.50th=[14484], 99.90th=[14877], 99.95th=[16909], 00:15:10.107 | 99.99th=[16909] 00:15:10.107 write: IOPS=5968, BW=23.3MiB/s (24.4MB/s)(23.5MiB/1007msec); 0 zone resets 00:15:10.107 slat (usec): min=5, max=6102, avg=81.21, stdev=421.69 00:15:10.107 clat (usec): min=3671, max=16644, avg=10799.68, stdev=1124.63 00:15:10.107 lat (usec): min=6992, max=19134, avg=10880.89, stdev=1188.54 00:15:10.107 clat percentiles (usec): 00:15:10.107 | 1.00th=[ 7963], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10028], 00:15:10.107 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10552], 60.00th=[10945], 00:15:10.107 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11994], 95.00th=[13173], 00:15:10.107 | 99.00th=[14353], 99.50th=[15401], 99.90th=[15664], 99.95th=[15664], 00:15:10.107 | 99.99th=[16581] 00:15:10.107 bw ( KiB/s): min=22480, max=24625, per=33.91%, avg=23552.50, stdev=1516.74, samples=2 00:15:10.107 iops : min= 5620, max= 6156, avg=5888.00, stdev=379.01, samples=2 00:15:10.107 lat (msec) : 4=0.01%, 10=16.63%, 20=83.36% 00:15:10.107 cpu : usr=3.28%, sys=10.44%, ctx=489, majf=0, minf=2 00:15:10.107 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:15:10.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:10.107 issued rwts: total=5632,6010,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:10.107 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:10.107 job1: (groupid=0, jobs=1): err= 0: pid=65456: Mon Nov 4 14:41:18 2024 00:15:10.107 read: IOPS=2943, BW=11.5MiB/s (12.1MB/s)(11.5MiB/1004msec) 00:15:10.107 slat (usec): min=2, max=8679, avg=174.25, stdev=713.81 00:15:10.107 clat (usec): min=976, max=34361, avg=21070.62, stdev=4664.98 00:15:10.107 lat (usec): min=4735, max=34403, avg=21244.87, stdev=4690.54 00:15:10.107 clat percentiles (usec): 00:15:10.107 | 1.00th=[ 7767], 5.00th=[11863], 10.00th=[12649], 20.00th=[17433], 00:15:10.107 | 30.00th=[20317], 40.00th=[21365], 50.00th=[22414], 60.00th=[22938], 00:15:10.107 | 70.00th=[23462], 80.00th=[24511], 90.00th=[25297], 95.00th=[26608], 00:15:10.107 | 99.00th=[30016], 99.50th=[30802], 99.90th=[32637], 99.95th=[32637], 00:15:10.107 | 99.99th=[34341] 00:15:10.107 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:15:10.107 slat (usec): min=4, max=6348, avg=152.17, stdev=617.98 00:15:10.107 clat (usec): min=8590, max=32530, avg=21019.02, stdev=4992.69 00:15:10.107 lat (usec): min=8736, max=32582, avg=21171.18, stdev=5012.87 00:15:10.107 clat percentiles (usec): 00:15:10.107 | 1.00th=[10945], 5.00th=[11207], 10.00th=[11731], 20.00th=[17433], 00:15:10.107 | 30.00th=[21103], 40.00th=[21627], 50.00th=[21890], 60.00th=[22676], 00:15:10.107 | 70.00th=[23462], 80.00th=[24249], 90.00th=[25822], 95.00th=[27395], 00:15:10.107 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31851], 99.95th=[32637], 00:15:10.107 | 99.99th=[32637] 00:15:10.107 bw ( KiB/s): min=10896, max=13680, per=17.69%, avg=12288.00, 
stdev=1968.59, samples=2 00:15:10.107 iops : min= 2724, max= 3420, avg=3072.00, stdev=492.15, samples=2 00:15:10.107 lat (usec) : 1000=0.02% 00:15:10.107 lat (msec) : 10=1.11%, 20=25.09%, 50=73.78% 00:15:10.107 cpu : usr=1.20%, sys=6.68%, ctx=761, majf=0, minf=5 00:15:10.107 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:15:10.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:10.107 issued rwts: total=2955,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:10.107 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:10.107 job2: (groupid=0, jobs=1): err= 0: pid=65457: Mon Nov 4 14:41:18 2024 00:15:10.107 read: IOPS=2929, BW=11.4MiB/s (12.0MB/s)(11.5MiB/1004msec) 00:15:10.107 slat (usec): min=2, max=6056, avg=170.19, stdev=678.48 00:15:10.107 clat (usec): min=440, max=31172, avg=21093.97, stdev=5048.60 00:15:10.107 lat (usec): min=4790, max=31843, avg=21264.15, stdev=5079.54 00:15:10.107 clat percentiles (usec): 00:15:10.107 | 1.00th=[ 7832], 5.00th=[12125], 10.00th=[12911], 20.00th=[15926], 00:15:10.107 | 30.00th=[20317], 40.00th=[21890], 50.00th=[22938], 60.00th=[23462], 00:15:10.107 | 70.00th=[23987], 80.00th=[24773], 90.00th=[25560], 95.00th=[27395], 00:15:10.107 | 99.00th=[29492], 99.50th=[30540], 99.90th=[31065], 99.95th=[31065], 00:15:10.107 | 99.99th=[31065] 00:15:10.107 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:15:10.107 slat (usec): min=3, max=5658, avg=157.93, stdev=620.02 00:15:10.107 clat (usec): min=8679, max=31804, avg=21084.55, stdev=4313.27 00:15:10.107 lat (usec): min=8694, max=31813, avg=21242.47, stdev=4331.56 00:15:10.107 clat percentiles (usec): 00:15:10.107 | 1.00th=[12125], 5.00th=[12649], 10.00th=[13173], 20.00th=[17171], 00:15:10.107 | 30.00th=[21103], 40.00th=[21627], 50.00th=[21890], 60.00th=[22676], 00:15:10.107 | 70.00th=[23200], 80.00th=[23987], 90.00th=[25297], 95.00th=[26608], 00:15:10.107 | 99.00th=[30278], 99.50th=[30540], 99.90th=[31589], 99.95th=[31589], 00:15:10.107 | 99.99th=[31851] 00:15:10.107 bw ( KiB/s): min=11200, max=13402, per=17.71%, avg=12301.00, stdev=1557.05, samples=2 00:15:10.108 iops : min= 2800, max= 3350, avg=3075.00, stdev=388.91, samples=2 00:15:10.108 lat (usec) : 500=0.02% 00:15:10.108 lat (msec) : 10=0.93%, 20=24.91%, 50=74.14% 00:15:10.108 cpu : usr=1.99%, sys=6.38%, ctx=781, majf=0, minf=1 00:15:10.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:15:10.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:10.108 issued rwts: total=2941,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:10.108 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:10.108 job3: (groupid=0, jobs=1): err= 0: pid=65458: Mon Nov 4 14:41:18 2024 00:15:10.108 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:15:10.108 slat (usec): min=3, max=6323, avg=99.03, stdev=543.99 00:15:10.108 clat (usec): min=6973, max=19210, avg=12418.03, stdev=1525.31 00:15:10.108 lat (usec): min=7965, max=21982, avg=12517.06, stdev=1573.14 00:15:10.108 clat percentiles (usec): 00:15:10.108 | 1.00th=[ 8455], 5.00th=[ 9765], 10.00th=[10683], 20.00th=[11469], 00:15:10.108 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12518], 60.00th=[12649], 00:15:10.108 | 70.00th=[12911], 80.00th=[13435], 90.00th=[13960], 95.00th=[15139], 
00:15:10.108 | 99.00th=[17171], 99.50th=[17433], 99.90th=[18220], 99.95th=[18744], 00:15:10.108 | 99.99th=[19268] 00:15:10.108 write: IOPS=5302, BW=20.7MiB/s (21.7MB/s)(20.8MiB/1005msec); 0 zone resets 00:15:10.108 slat (usec): min=6, max=6121, avg=88.07, stdev=474.54 00:15:10.108 clat (usec): min=416, max=18816, avg=11906.15, stdev=1644.04 00:15:10.108 lat (usec): min=5151, max=18854, avg=11994.22, stdev=1694.42 00:15:10.108 clat percentiles (usec): 00:15:10.108 | 1.00th=[ 6128], 5.00th=[ 9241], 10.00th=[10290], 20.00th=[10814], 00:15:10.108 | 30.00th=[11338], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:15:10.108 | 70.00th=[12518], 80.00th=[12649], 90.00th=[13566], 95.00th=[14353], 00:15:10.108 | 99.00th=[16909], 99.50th=[17433], 99.90th=[17957], 99.95th=[18482], 00:15:10.108 | 99.99th=[18744] 00:15:10.108 bw ( KiB/s): min=19600, max=22052, per=29.99%, avg=20826.00, stdev=1733.83, samples=2 00:15:10.108 iops : min= 4900, max= 5513, avg=5206.50, stdev=433.46, samples=2 00:15:10.108 lat (usec) : 500=0.01% 00:15:10.108 lat (msec) : 10=6.20%, 20=93.79% 00:15:10.108 cpu : usr=3.29%, sys=9.26%, ctx=517, majf=0, minf=5 00:15:10.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:10.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:10.108 issued rwts: total=5120,5329,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:10.108 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:10.108 00:15:10.108 Run status group 0 (all jobs): 00:15:10.108 READ: bw=64.6MiB/s (67.7MB/s), 11.4MiB/s-21.8MiB/s (12.0MB/s-22.9MB/s), io=65.0MiB (68.2MB), run=1004-1007msec 00:15:10.108 WRITE: bw=67.8MiB/s (71.1MB/s), 12.0MiB/s-23.3MiB/s (12.5MB/s-24.4MB/s), io=68.3MiB (71.6MB), run=1004-1007msec 00:15:10.108 00:15:10.108 Disk stats (read/write): 00:15:10.108 nvme0n1: ios=5164/5136, merge=0/0, ticks=18112/15808, in_queue=33920, util=89.08% 00:15:10.108 nvme0n2: ios=2609/2818, merge=0/0, ticks=16923/16870, in_queue=33793, util=89.13% 00:15:10.108 nvme0n3: ios=2606/2781, merge=0/0, ticks=17791/17118, in_queue=34909, util=91.47% 00:15:10.108 nvme0n4: ios=4557/4608, merge=0/0, ticks=26940/24545, in_queue=51485, util=91.50% 00:15:10.108 14:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:10.108 [global] 00:15:10.108 thread=1 00:15:10.108 invalidate=1 00:15:10.108 rw=randwrite 00:15:10.108 time_based=1 00:15:10.108 runtime=1 00:15:10.108 ioengine=libaio 00:15:10.108 direct=1 00:15:10.108 bs=4096 00:15:10.108 iodepth=128 00:15:10.108 norandommap=0 00:15:10.108 numjobs=1 00:15:10.108 00:15:10.108 verify_dump=1 00:15:10.108 verify_backlog=512 00:15:10.108 verify_state_save=0 00:15:10.108 do_verify=1 00:15:10.108 verify=crc32c-intel 00:15:10.108 [job0] 00:15:10.108 filename=/dev/nvme0n1 00:15:10.108 [job1] 00:15:10.108 filename=/dev/nvme0n2 00:15:10.108 [job2] 00:15:10.108 filename=/dev/nvme0n3 00:15:10.108 [job3] 00:15:10.108 filename=/dev/nvme0n4 00:15:10.108 Could not set queue depth (nvme0n1) 00:15:10.108 Could not set queue depth (nvme0n2) 00:15:10.108 Could not set queue depth (nvme0n3) 00:15:10.108 Could not set queue depth (nvme0n4) 00:15:10.108 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:10.108 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:15:10.108 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:10.108 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:10.108 fio-3.35 00:15:10.108 Starting 4 threads 00:15:11.521 00:15:11.521 job0: (groupid=0, jobs=1): err= 0: pid=65519: Mon Nov 4 14:41:20 2024 00:15:11.521 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:15:11.521 slat (usec): min=2, max=8689, avg=84.90, stdev=532.26 00:15:11.521 clat (usec): min=3921, max=21412, avg=11611.65, stdev=1834.20 00:15:11.521 lat (usec): min=3927, max=21923, avg=11696.55, stdev=1853.47 00:15:11.521 clat percentiles (usec): 00:15:11.521 | 1.00th=[ 7046], 5.00th=[ 8717], 10.00th=[10290], 20.00th=[10945], 00:15:11.521 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11600], 60.00th=[11731], 00:15:11.521 | 70.00th=[11863], 80.00th=[12256], 90.00th=[12649], 95.00th=[14091], 00:15:11.521 | 99.00th=[19006], 99.50th=[20055], 99.90th=[20841], 99.95th=[21365], 00:15:11.521 | 99.99th=[21365] 00:15:11.521 write: IOPS=5905, BW=23.1MiB/s (24.2MB/s)(23.2MiB/1004msec); 0 zone resets 00:15:11.521 slat (usec): min=3, max=9413, avg=83.19, stdev=536.98 00:15:11.521 clat (usec): min=3003, max=21347, avg=10424.23, stdev=1836.32 00:15:11.521 lat (usec): min=3020, max=21353, avg=10507.42, stdev=1791.78 00:15:11.521 clat percentiles (usec): 00:15:11.521 | 1.00th=[ 3720], 5.00th=[ 6652], 10.00th=[ 8586], 20.00th=[ 9634], 00:15:11.521 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[10552], 60.00th=[10945], 00:15:11.521 | 70.00th=[11076], 80.00th=[11600], 90.00th=[12125], 95.00th=[13042], 00:15:11.521 | 99.00th=[14615], 99.50th=[14615], 99.90th=[15008], 99.95th=[15795], 00:15:11.521 | 99.99th=[21365] 00:15:11.521 bw ( KiB/s): min=21840, max=24576, per=35.03%, avg=23208.00, stdev=1934.64, samples=2 00:15:11.521 iops : min= 5460, max= 6144, avg=5802.00, stdev=483.66, samples=2 00:15:11.521 lat (msec) : 4=0.76%, 10=19.92%, 20=79.03%, 50=0.29% 00:15:11.521 cpu : usr=4.09%, sys=8.97%, ctx=319, majf=0, minf=13 00:15:11.521 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:15:11.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:11.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:11.521 issued rwts: total=5632,5929,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:11.521 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:11.521 job1: (groupid=0, jobs=1): err= 0: pid=65520: Mon Nov 4 14:41:20 2024 00:15:11.521 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:15:11.521 slat (usec): min=3, max=9601, avg=84.37, stdev=583.50 00:15:11.521 clat (usec): min=6076, max=21415, avg=11689.20, stdev=1472.36 00:15:11.521 lat (usec): min=6083, max=23411, avg=11773.57, stdev=1490.83 00:15:11.521 clat percentiles (usec): 00:15:11.521 | 1.00th=[ 6980], 5.00th=[10290], 10.00th=[10814], 20.00th=[11207], 00:15:11.521 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11731], 60.00th=[11863], 00:15:11.521 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12518], 95.00th=[13042], 00:15:11.521 | 99.00th=[17695], 99.50th=[18220], 99.90th=[19530], 99.95th=[19530], 00:15:11.521 | 99.99th=[21365] 00:15:11.521 write: IOPS=5686, BW=22.2MiB/s (23.3MB/s)(22.3MiB/1004msec); 0 zone resets 00:15:11.521 slat (usec): min=4, max=11206, avg=88.54, stdev=613.85 00:15:11.521 clat (usec): min=1167, max=22768, avg=10766.95, stdev=1223.03 00:15:11.521 lat 
(usec): min=6406, max=22798, avg=10855.49, stdev=1138.30 00:15:11.521 clat percentiles (usec): 00:15:11.521 | 1.00th=[ 6390], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10028], 00:15:11.521 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:15:11.521 | 70.00th=[11076], 80.00th=[11469], 90.00th=[11600], 95.00th=[12780], 00:15:11.521 | 99.00th=[14353], 99.50th=[14746], 99.90th=[16188], 99.95th=[16188], 00:15:11.521 | 99.99th=[22676] 00:15:11.521 bw ( KiB/s): min=20480, max=24576, per=34.00%, avg=22528.00, stdev=2896.31, samples=2 00:15:11.521 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:15:11.521 lat (msec) : 2=0.01%, 10=11.37%, 20=88.58%, 50=0.04% 00:15:11.521 cpu : usr=2.39%, sys=7.78%, ctx=230, majf=0, minf=11 00:15:11.521 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:15:11.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:11.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:11.522 issued rwts: total=5632,5709,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:11.522 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:11.522 job2: (groupid=0, jobs=1): err= 0: pid=65521: Mon Nov 4 14:41:20 2024 00:15:11.522 read: IOPS=2408, BW=9633KiB/s (9865kB/s)(9672KiB/1004msec) 00:15:11.522 slat (usec): min=4, max=16377, avg=217.89, stdev=1062.35 00:15:11.522 clat (usec): min=1607, max=76023, avg=26836.81, stdev=7525.76 00:15:11.522 lat (usec): min=7760, max=76054, avg=27054.70, stdev=7579.78 00:15:11.522 clat percentiles (usec): 00:15:11.522 | 1.00th=[ 8029], 5.00th=[18220], 10.00th=[20579], 20.00th=[22938], 00:15:11.522 | 30.00th=[23462], 40.00th=[23725], 50.00th=[24511], 60.00th=[26346], 00:15:11.522 | 70.00th=[28967], 80.00th=[32900], 90.00th=[33817], 95.00th=[35390], 00:15:11.522 | 99.00th=[61080], 99.50th=[70779], 99.90th=[74974], 99.95th=[74974], 00:15:11.522 | 99.99th=[76022] 00:15:11.522 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:15:11.522 slat (usec): min=6, max=9187, avg=180.16, stdev=957.87 00:15:11.522 clat (usec): min=9563, max=87299, avg=24219.17, stdev=15302.34 00:15:11.522 lat (usec): min=9577, max=87314, avg=24399.32, stdev=15403.26 00:15:11.522 clat percentiles (usec): 00:15:11.522 | 1.00th=[13435], 5.00th=[14615], 10.00th=[14877], 20.00th=[15664], 00:15:11.522 | 30.00th=[15926], 40.00th=[16188], 50.00th=[16450], 60.00th=[19268], 00:15:11.522 | 70.00th=[20055], 80.00th=[27132], 90.00th=[47973], 95.00th=[61080], 00:15:11.522 | 99.00th=[81265], 99.50th=[82314], 99.90th=[87557], 99.95th=[87557], 00:15:11.522 | 99.99th=[87557] 00:15:11.522 bw ( KiB/s): min= 8192, max=12288, per=15.45%, avg=10240.00, stdev=2896.31, samples=2 00:15:11.522 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:15:11.522 lat (msec) : 2=0.02%, 10=0.72%, 20=38.31%, 50=55.24%, 100=5.71% 00:15:11.522 cpu : usr=1.00%, sys=5.38%, ctx=225, majf=0, minf=11 00:15:11.522 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:15:11.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:11.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:11.522 issued rwts: total=2418,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:11.522 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:11.522 job3: (groupid=0, jobs=1): err= 0: pid=65522: Mon Nov 4 14:41:20 2024 00:15:11.522 read: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec) 00:15:11.522 slat 
(usec): min=3, max=16599, avg=269.85, stdev=1587.58 00:15:11.522 clat (usec): min=14925, max=64826, avg=33133.14, stdev=12067.67 00:15:11.522 lat (usec): min=18651, max=64837, avg=33402.99, stdev=12069.69 00:15:11.522 clat percentiles (usec): 00:15:11.522 | 1.00th=[18744], 5.00th=[20317], 10.00th=[20579], 20.00th=[21365], 00:15:11.522 | 30.00th=[25560], 40.00th=[29754], 50.00th=[30540], 60.00th=[31589], 00:15:11.522 | 70.00th=[33817], 80.00th=[42730], 90.00th=[52167], 95.00th=[58459], 00:15:11.522 | 99.00th=[64750], 99.50th=[64750], 99.90th=[64750], 99.95th=[64750], 00:15:11.522 | 99.99th=[64750] 00:15:11.522 write: IOPS=2425, BW=9703KiB/s (9936kB/s)(9732KiB/1003msec); 0 zone resets 00:15:11.522 slat (usec): min=6, max=15153, avg=179.20, stdev=983.23 00:15:11.522 clat (usec): min=992, max=54658, avg=23405.01, stdev=7937.02 00:15:11.522 lat (usec): min=7132, max=54683, avg=23584.22, stdev=7903.58 00:15:11.522 clat percentiles (usec): 00:15:11.522 | 1.00th=[ 7504], 5.00th=[16188], 10.00th=[16581], 20.00th=[16909], 00:15:11.522 | 30.00th=[19530], 40.00th=[20579], 50.00th=[21890], 60.00th=[22676], 00:15:11.522 | 70.00th=[23987], 80.00th=[26608], 90.00th=[36963], 95.00th=[40109], 00:15:11.522 | 99.00th=[54789], 99.50th=[54789], 99.90th=[54789], 99.95th=[54789], 00:15:11.522 | 99.99th=[54789] 00:15:11.522 bw ( KiB/s): min= 8192, max=10248, per=13.92%, avg=9220.00, stdev=1453.81, samples=2 00:15:11.522 iops : min= 2048, max= 2562, avg=2305.00, stdev=363.45, samples=2 00:15:11.522 lat (usec) : 1000=0.02% 00:15:11.522 lat (msec) : 10=0.71%, 20=19.73%, 50=72.62%, 100=6.92% 00:15:11.522 cpu : usr=2.10%, sys=3.79%, ctx=141, majf=0, minf=17 00:15:11.522 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:15:11.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:11.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:11.522 issued rwts: total=2048,2433,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:11.522 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:11.522 00:15:11.522 Run status group 0 (all jobs): 00:15:11.522 READ: bw=61.2MiB/s (64.2MB/s), 8167KiB/s-21.9MiB/s (8364kB/s-23.0MB/s), io=61.4MiB (64.4MB), run=1003-1004msec 00:15:11.522 WRITE: bw=64.7MiB/s (67.8MB/s), 9703KiB/s-23.1MiB/s (9936kB/s-24.2MB/s), io=65.0MiB (68.1MB), run=1003-1004msec 00:15:11.522 00:15:11.522 Disk stats (read/write): 00:15:11.522 nvme0n1: ios=4989/5120, merge=0/0, ticks=54951/50065, in_queue=105016, util=90.17% 00:15:11.522 nvme0n2: ios=4855/5120, merge=0/0, ticks=53582/51966, in_queue=105548, util=90.22% 00:15:11.522 nvme0n3: ios=2092/2236, merge=0/0, ticks=28053/25559, in_queue=53612, util=91.66% 00:15:11.522 nvme0n4: ios=1929/2048, merge=0/0, ticks=15018/11239, in_queue=26257, util=91.73% 00:15:11.522 14:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:11.522 14:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=65541 00:15:11.522 14:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:11.522 14:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:11.522 [global] 00:15:11.522 thread=1 00:15:11.522 invalidate=1 00:15:11.522 rw=read 00:15:11.522 time_based=1 00:15:11.522 runtime=10 00:15:11.522 ioengine=libaio 00:15:11.522 direct=1 00:15:11.522 bs=4096 00:15:11.522 iodepth=1 00:15:11.522 norandommap=1 00:15:11.522 numjobs=1 
00:15:11.522 00:15:11.522 [job0] 00:15:11.522 filename=/dev/nvme0n1 00:15:11.522 [job1] 00:15:11.522 filename=/dev/nvme0n2 00:15:11.522 [job2] 00:15:11.522 filename=/dev/nvme0n3 00:15:11.522 [job3] 00:15:11.522 filename=/dev/nvme0n4 00:15:11.522 Could not set queue depth (nvme0n1) 00:15:11.522 Could not set queue depth (nvme0n2) 00:15:11.522 Could not set queue depth (nvme0n3) 00:15:11.522 Could not set queue depth (nvme0n4) 00:15:11.522 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:11.522 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:11.522 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:11.522 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:11.522 fio-3.35 00:15:11.522 Starting 4 threads 00:15:14.818 14:41:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:14.819 fio: pid=65584, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:14.819 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=33382400, buflen=4096 00:15:14.819 14:41:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:14.819 fio: pid=65583, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:14.819 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=51961856, buflen=4096 00:15:14.819 14:41:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:14.819 14:41:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:15.077 fio: pid=65581, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:15.077 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=753664, buflen=4096 00:15:15.077 14:41:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:15.077 14:41:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:15.077 fio: pid=65582, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:15.077 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=5226496, buflen=4096 00:15:15.077 14:41:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:15.077 14:41:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:15.077 00:15:15.077 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=65581: Mon Nov 4 14:41:24 2024 00:15:15.077 read: IOPS=4890, BW=19.1MiB/s (20.0MB/s)(64.7MiB/3388msec) 00:15:15.077 slat (usec): min=3, max=12481, avg=13.75, stdev=165.47 00:15:15.077 clat (usec): min=2, max=2148, avg=189.74, stdev=80.23 00:15:15.077 lat (usec): min=76, max=12609, avg=203.49, stdev=186.12 00:15:15.077 clat percentiles (usec): 
00:15:15.077 | 1.00th=[ 86], 5.00th=[ 120], 10.00th=[ 126], 20.00th=[ 133], 00:15:15.077 | 30.00th=[ 139], 40.00th=[ 149], 50.00th=[ 161], 60.00th=[ 186], 00:15:15.077 | 70.00th=[ 202], 80.00th=[ 223], 90.00th=[ 318], 95.00th=[ 363], 00:15:15.077 | 99.00th=[ 437], 99.50th=[ 465], 99.90th=[ 529], 99.95th=[ 562], 00:15:15.077 | 99.99th=[ 1729] 00:15:15.077 bw ( KiB/s): min=14664, max=24742, per=30.15%, avg=18466.33, stdev=3680.47, samples=6 00:15:15.077 iops : min= 3666, max= 6185, avg=4616.50, stdev=919.95, samples=6 00:15:15.077 lat (usec) : 4=0.01%, 100=2.11%, 250=82.06%, 500=15.63%, 750=0.18% 00:15:15.077 lat (msec) : 2=0.01%, 4=0.01% 00:15:15.077 cpu : usr=1.03%, sys=4.99%, ctx=16583, majf=0, minf=1 00:15:15.077 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:15.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.077 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.077 issued rwts: total=16569,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:15.077 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:15.077 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=65582: Mon Nov 4 14:41:24 2024 00:15:15.077 read: IOPS=4911, BW=19.2MiB/s (20.1MB/s)(69.0MiB/3596msec) 00:15:15.077 slat (usec): min=3, max=9703, avg=13.27, stdev=160.59 00:15:15.077 clat (usec): min=67, max=6479, avg=189.09, stdev=140.19 00:15:15.077 lat (usec): min=73, max=9861, avg=202.37, stdev=215.32 00:15:15.077 clat percentiles (usec): 00:15:15.077 | 1.00th=[ 77], 5.00th=[ 85], 10.00th=[ 93], 20.00th=[ 124], 00:15:15.077 | 30.00th=[ 133], 40.00th=[ 141], 50.00th=[ 153], 60.00th=[ 184], 00:15:15.077 | 70.00th=[ 202], 80.00th=[ 229], 90.00th=[ 343], 95.00th=[ 383], 00:15:15.077 | 99.00th=[ 453], 99.50th=[ 486], 99.90th=[ 1663], 99.95th=[ 3458], 00:15:15.077 | 99.99th=[ 5145] 00:15:15.077 bw ( KiB/s): min=15136, max=20736, per=28.43%, avg=17412.00, stdev=1981.36, samples=6 00:15:15.077 iops : min= 3784, max= 5184, avg=4353.00, stdev=495.34, samples=6 00:15:15.077 lat (usec) : 100=12.44%, 250=70.33%, 500=16.89%, 750=0.22%, 1000=0.01% 00:15:15.077 lat (msec) : 2=0.02%, 4=0.07%, 10=0.01% 00:15:15.077 cpu : usr=1.03%, sys=5.26%, ctx=17671, majf=0, minf=2 00:15:15.077 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:15.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.077 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.077 issued rwts: total=17661,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:15.077 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:15.077 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=65583: Mon Nov 4 14:41:24 2024 00:15:15.077 read: IOPS=3973, BW=15.5MiB/s (16.3MB/s)(49.6MiB/3193msec) 00:15:15.077 slat (usec): min=5, max=7913, avg=13.72, stdev=98.00 00:15:15.077 clat (usec): min=100, max=2238, avg=236.23, stdev=99.44 00:15:15.077 lat (usec): min=105, max=8056, avg=249.95, stdev=143.42 00:15:15.077 clat percentiles (usec): 00:15:15.077 | 1.00th=[ 110], 5.00th=[ 120], 10.00th=[ 133], 20.00th=[ 161], 00:15:15.077 | 30.00th=[ 180], 40.00th=[ 200], 50.00th=[ 215], 60.00th=[ 227], 00:15:15.077 | 70.00th=[ 249], 80.00th=[ 322], 90.00th=[ 379], 95.00th=[ 416], 00:15:15.077 | 99.00th=[ 506], 99.50th=[ 553], 99.90th=[ 775], 99.95th=[ 1004], 00:15:15.077 | 99.99th=[ 1991] 00:15:15.077 bw ( KiB/s): 
min=13572, max=19112, per=24.82%, avg=15200.67, stdev=2000.78, samples=6 00:15:15.077 iops : min= 3393, max= 4778, avg=3800.17, stdev=500.20, samples=6 00:15:15.077 lat (usec) : 250=70.13%, 500=28.76%, 750=1.00%, 1000=0.05% 00:15:15.077 lat (msec) : 2=0.05%, 4=0.01% 00:15:15.078 cpu : usr=1.13%, sys=5.17%, ctx=12712, majf=0, minf=2 00:15:15.078 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:15.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.078 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.078 issued rwts: total=12687,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:15.078 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:15.078 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=65584: Mon Nov 4 14:41:24 2024 00:15:15.078 read: IOPS=2726, BW=10.7MiB/s (11.2MB/s)(31.8MiB/2989msec) 00:15:15.078 slat (usec): min=5, max=107, avg=23.65, stdev=10.56 00:15:15.078 clat (usec): min=110, max=2128, avg=339.03, stdev=96.30 00:15:15.078 lat (usec): min=117, max=2136, avg=362.69, stdev=103.22 00:15:15.078 clat percentiles (usec): 00:15:15.078 | 1.00th=[ 145], 5.00th=[ 190], 10.00th=[ 210], 20.00th=[ 241], 00:15:15.078 | 30.00th=[ 310], 40.00th=[ 338], 50.00th=[ 355], 60.00th=[ 371], 00:15:15.078 | 70.00th=[ 383], 80.00th=[ 404], 90.00th=[ 437], 95.00th=[ 469], 00:15:15.078 | 99.00th=[ 553], 99.50th=[ 619], 99.90th=[ 906], 99.95th=[ 1012], 00:15:15.078 | 99.99th=[ 2114] 00:15:15.078 bw ( KiB/s): min= 9932, max=11264, per=17.14%, avg=10501.60, stdev=550.81, samples=5 00:15:15.078 iops : min= 2483, max= 2816, avg=2625.40, stdev=137.70, samples=5 00:15:15.078 lat (usec) : 250=21.69%, 500=75.45%, 750=2.61%, 1000=0.16% 00:15:15.078 lat (msec) : 2=0.05%, 4=0.02% 00:15:15.078 cpu : usr=2.21%, sys=6.63%, ctx=8151, majf=0, minf=1 00:15:15.078 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:15.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.078 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.078 issued rwts: total=8151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:15.078 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:15.078 00:15:15.078 Run status group 0 (all jobs): 00:15:15.078 READ: bw=59.8MiB/s (62.7MB/s), 10.7MiB/s-19.2MiB/s (11.2MB/s-20.1MB/s), io=215MiB (226MB), run=2989-3596msec 00:15:15.078 00:15:15.078 Disk stats (read/write): 00:15:15.078 nvme0n1: ios=16534/0, merge=0/0, ticks=3103/0, in_queue=3103, util=95.68% 00:15:15.078 nvme0n2: ios=15644/0, merge=0/0, ticks=3000/0, in_queue=3000, util=95.50% 00:15:15.078 nvme0n3: ios=12225/0, merge=0/0, ticks=2843/0, in_queue=2843, util=96.50% 00:15:15.078 nvme0n4: ios=7672/0, merge=0/0, ticks=2453/0, in_queue=2453, util=96.81% 00:15:15.334 14:41:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:15.334 14:41:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:15.593 14:41:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:15.593 14:41:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:15.854 14:41:24 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:15.854 14:41:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:16.113 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:16.113 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:16.402 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:15:16.402 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 65541 00:15:16.402 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:15:16.402 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:16.402 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.402 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:16.402 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:15:16.402 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:15:16.402 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:16.402 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:15:16.402 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:16.402 nvmf hotplug test: fio failed as expected 00:15:16.402 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:15:16.402 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:16.402 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:16.402 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:16.402 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:16.402 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:16.402 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:16.402 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:16.402 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:15:16.402 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:16.402 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:15:16.731 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:16.731 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:15:16.731 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@125 -- # for i in {1..20} 00:15:16.731 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:16.731 rmmod nvme_tcp 00:15:16.731 rmmod nvme_fabrics 00:15:16.731 rmmod nvme_keyring 00:15:16.731 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:16.731 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:15:16.731 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:15:16.731 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 65163 ']' 00:15:16.731 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 65163 00:15:16.731 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 65163 ']' 00:15:16.731 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 65163 00:15:16.731 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:15:16.731 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:16.731 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65163 00:15:16.731 killing process with pid 65163 00:15:16.731 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:16.731 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:16.731 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65163' 00:15:16.731 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 65163 00:15:16.731 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 65163 00:15:16.731 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:16.731 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:16.731 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:16.731 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:15:16.992 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:15:16.992 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:15:16.992 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:16.992 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:16.992 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:16.992 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:16.992 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:16.992 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:16.992 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 
00:15:16.992 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:16.992 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:16.992 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:16.992 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:16.992 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:16.992 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:16.992 14:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:16.992 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:16.992 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:16.992 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:16.992 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.992 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:16.992 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.992 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:15:16.992 00:15:16.992 real 0m18.319s 00:15:16.992 user 1m9.875s 00:15:16.992 sys 0m7.718s 00:15:16.992 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:16.992 ************************************ 00:15:16.992 END TEST nvmf_fio_target 00:15:16.992 ************************************ 00:15:16.992 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:17.254 ************************************ 00:15:17.254 START TEST nvmf_bdevio 00:15:17.254 ************************************ 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:17.254 * Looking for test storage... 
00:15:17.254 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:17.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.254 --rc genhtml_branch_coverage=1 00:15:17.254 --rc genhtml_function_coverage=1 00:15:17.254 --rc genhtml_legend=1 00:15:17.254 --rc geninfo_all_blocks=1 00:15:17.254 --rc geninfo_unexecuted_blocks=1 00:15:17.254 00:15:17.254 ' 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:17.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.254 --rc genhtml_branch_coverage=1 00:15:17.254 --rc genhtml_function_coverage=1 00:15:17.254 --rc genhtml_legend=1 00:15:17.254 --rc geninfo_all_blocks=1 00:15:17.254 --rc geninfo_unexecuted_blocks=1 00:15:17.254 00:15:17.254 ' 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:17.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.254 --rc genhtml_branch_coverage=1 00:15:17.254 --rc genhtml_function_coverage=1 00:15:17.254 --rc genhtml_legend=1 00:15:17.254 --rc geninfo_all_blocks=1 00:15:17.254 --rc geninfo_unexecuted_blocks=1 00:15:17.254 00:15:17.254 ' 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:17.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.254 --rc genhtml_branch_coverage=1 00:15:17.254 --rc genhtml_function_coverage=1 00:15:17.254 --rc genhtml_legend=1 00:15:17.254 --rc geninfo_all_blocks=1 00:15:17.254 --rc geninfo_unexecuted_blocks=1 00:15:17.254 00:15:17.254 ' 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.254 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:17.255 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:17.255 Cannot find device "nvmf_init_br" 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:17.255 Cannot find device "nvmf_init_br2" 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:17.255 Cannot find device "nvmf_tgt_br" 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:17.255 Cannot find device "nvmf_tgt_br2" 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:17.255 Cannot find device "nvmf_init_br" 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:15:17.255 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:17.515 Cannot find device "nvmf_init_br2" 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:17.515 Cannot find device "nvmf_tgt_br" 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:17.515 Cannot find device "nvmf_tgt_br2" 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:17.515 Cannot find device "nvmf_br" 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:17.515 Cannot find device "nvmf_init_if" 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:17.515 Cannot find device "nvmf_init_if2" 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:17.515 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:17.515 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:17.515 
14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:17.515 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:17.515 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:15:17.515 00:15:17.515 --- 10.0.0.3 ping statistics --- 00:15:17.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.515 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:17.515 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:17.515 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:15:17.515 00:15:17.515 --- 10.0.0.4 ping statistics --- 00:15:17.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.515 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:17.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:17.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.015 ms 00:15:17.515 00:15:17.515 --- 10.0.0.1 ping statistics --- 00:15:17.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.515 rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:17.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:17.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:15:17.515 00:15:17.515 --- 10.0.0.2 ping statistics --- 00:15:17.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.515 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:17.515 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:17.774 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=65897 00:15:17.774 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 65897 00:15:17.774 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 65897 ']' 00:15:17.774 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:17.774 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.774 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:17.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.774 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.774 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:17.774 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:17.774 [2024-11-04 14:41:26.691500] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:15:17.774 [2024-11-04 14:41:26.691562] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.774 [2024-11-04 14:41:26.827071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:17.774 [2024-11-04 14:41:26.858534] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.774 [2024-11-04 14:41:26.858570] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.774 [2024-11-04 14:41:26.858576] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.774 [2024-11-04 14:41:26.858580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.774 [2024-11-04 14:41:26.858583] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:17.775 [2024-11-04 14:41:26.859355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:17.775 [2024-11-04 14:41:26.859536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:15:17.775 [2024-11-04 14:41:26.859653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:15:17.775 [2024-11-04 14:41:26.859729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:17.775 [2024-11-04 14:41:26.888506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:18.708 [2024-11-04 14:41:27.607253] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:18.708 Malloc0 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:18.708 [2024-11-04 14:41:27.659398] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:18.708 { 00:15:18.708 "params": { 00:15:18.708 "name": "Nvme$subsystem", 00:15:18.708 "trtype": "$TEST_TRANSPORT", 00:15:18.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:18.708 "adrfam": "ipv4", 00:15:18.708 "trsvcid": "$NVMF_PORT", 00:15:18.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:18.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:18.708 "hdgst": ${hdgst:-false}, 00:15:18.708 "ddgst": ${ddgst:-false} 00:15:18.708 }, 00:15:18.708 "method": "bdev_nvme_attach_controller" 00:15:18.708 } 00:15:18.708 EOF 00:15:18.708 )") 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
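The rpc_cmd calls above map one-for-one onto scripts/rpc.py invocations against the target's /var/tmp/spdk.sock (UNIX-domain sockets are not network-namespaced, so the socket is reachable from outside nvmf_tgt_ns_spdk). A sketch of the same provisioning sequence issued by hand, with the flag values exactly as the harness passes them:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# TCP transport with 8192-byte I/O units
$RPC nvmf_create_transport -t tcp -o -u 8192

# 64 MiB malloc bdev with 512-byte blocks to back the namespace
$RPC bdev_malloc_create 64 512 -b Malloc0

# subsystem cnode1: allow any host, attach Malloc0 as a namespace
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

# listen on the namespaced target address
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

bdevio then attaches to that listener purely from JSON: gen_nvmf_target_json fills the target IP, port and NQNs into the bdev_nvme_attach_controller template above, and the resolved config is handed to the bdevio binary via --json /dev/fd/62.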
00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:15:18.708 14:41:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:18.708 "params": { 00:15:18.708 "name": "Nvme1", 00:15:18.708 "trtype": "tcp", 00:15:18.708 "traddr": "10.0.0.3", 00:15:18.708 "adrfam": "ipv4", 00:15:18.708 "trsvcid": "4420", 00:15:18.708 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.708 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:18.708 "hdgst": false, 00:15:18.708 "ddgst": false 00:15:18.708 }, 00:15:18.708 "method": "bdev_nvme_attach_controller" 00:15:18.708 }' 00:15:18.708 [2024-11-04 14:41:27.700162] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:15:18.709 [2024-11-04 14:41:27.700222] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65933 ] 00:15:18.709 [2024-11-04 14:41:27.841797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:18.966 [2024-11-04 14:41:27.878345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.966 [2024-11-04 14:41:27.878438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.966 [2024-11-04 14:41:27.878441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.966 [2024-11-04 14:41:27.918065] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:18.966 I/O targets: 00:15:18.966 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:18.966 00:15:18.966 00:15:18.966 CUnit - A unit testing framework for C - Version 2.1-3 00:15:18.966 http://cunit.sourceforge.net/ 00:15:18.966 00:15:18.966 00:15:18.966 Suite: bdevio tests on: Nvme1n1 00:15:18.966 Test: blockdev write read block ...passed 00:15:18.966 Test: blockdev write zeroes read block ...passed 00:15:18.966 Test: blockdev write zeroes read no split ...passed 00:15:18.966 Test: blockdev write zeroes read split ...passed 00:15:18.966 Test: blockdev write zeroes read split partial ...passed 00:15:18.966 Test: blockdev reset ...[2024-11-04 14:41:28.046691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:18.966 [2024-11-04 14:41:28.046767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f9180 (9): Bad file descriptor 00:15:18.966 [2024-11-04 14:41:28.059667] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:15:18.966 passed 00:15:18.966 Test: blockdev write read 8 blocks ...passed 00:15:18.966 Test: blockdev write read size > 128k ...passed 00:15:18.966 Test: blockdev write read invalid size ...passed 00:15:18.966 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:18.966 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:18.966 Test: blockdev write read max offset ...passed 00:15:18.966 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:18.966 Test: blockdev writev readv 8 blocks ...passed 00:15:18.966 Test: blockdev writev readv 30 x 1block ...passed 00:15:18.966 Test: blockdev writev readv block ...passed 00:15:18.966 Test: blockdev writev readv size > 128k ...passed 00:15:18.966 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:18.966 Test: blockdev comparev and writev ...[2024-11-04 14:41:28.065415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:18.966 [2024-11-04 14:41:28.065451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:18.966 [2024-11-04 14:41:28.065465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:18.966 [2024-11-04 14:41:28.065472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:18.966 [2024-11-04 14:41:28.065766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:18.966 [2024-11-04 14:41:28.065780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:18.966 [2024-11-04 14:41:28.065793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:18.966 [2024-11-04 14:41:28.065800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:18.966 [2024-11-04 14:41:28.066095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:18.966 [2024-11-04 14:41:28.066109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:18.966 [2024-11-04 14:41:28.066121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:18.966 [2024-11-04 14:41:28.066127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:18.966 [2024-11-04 14:41:28.066416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:18.966 [2024-11-04 14:41:28.066431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:18.966 [2024-11-04 14:41:28.066442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:18.966 [2024-11-04 14:41:28.066448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:18.966 passed 00:15:18.966 Test: blockdev nvme passthru rw ...passed 00:15:18.966 Test: blockdev nvme passthru vendor specific ...[2024-11-04 14:41:28.067369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:18.966 [2024-11-04 14:41:28.067386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:18.966 [2024-11-04 14:41:28.067461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:18.966 [2024-11-04 14:41:28.067474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:18.966 [2024-11-04 14:41:28.067549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:18.966 [2024-11-04 14:41:28.067557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:18.966 [2024-11-04 14:41:28.067639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:18.966 [2024-11-04 14:41:28.067678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:18.966 passed 00:15:18.966 Test: blockdev nvme admin passthru ...passed 00:15:18.966 Test: blockdev copy ...passed 00:15:18.966 00:15:18.966 Run Summary: Type Total Ran Passed Failed Inactive 00:15:18.966 suites 1 1 n/a 0 0 00:15:18.966 tests 23 23 23 0 0 00:15:18.966 asserts 152 152 152 0 n/a 00:15:18.966 00:15:18.966 Elapsed time = 0.134 seconds 00:15:19.226 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:19.226 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.226 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:19.226 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.226 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:19.226 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:15:19.226 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:19.226 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:15:19.226 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:19.226 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:15:19.226 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:19.226 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:19.226 rmmod nvme_tcp 00:15:19.226 rmmod nvme_fabrics 00:15:19.226 rmmod nvme_keyring 00:15:19.226 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:19.226 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:15:19.226 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
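Every firewall rule the harness inserted earlier carries an SPDK_NVMF comment tag (that is what the ipts wrapper adds), so the nvmftestfini path that follows never deletes rules individually: it filters the tagged entries out of the saved ruleset in one pass. A sketch of that tag-and-sweep pattern, assuming iptables-save/iptables-restore and root:

# rules are added with a recognizable comment tag
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

# teardown: drop every tagged rule at once, leaving unrelated rules untouched
iptables-save | grep -v SPDK_NVMF | iptables-restore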
00:15:19.226 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 65897 ']' 00:15:19.226 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 65897 00:15:19.226 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 65897 ']' 00:15:19.226 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 65897 00:15:19.226 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:15:19.226 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:19.226 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65897 00:15:19.226 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:15:19.226 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:15:19.226 killing process with pid 65897 00:15:19.226 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65897' 00:15:19.226 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 65897 00:15:19.226 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 65897 00:15:19.483 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:19.483 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:19.483 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:19.483 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:15:19.483 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:15:19.483 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:19.483 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:15:19.483 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:19.483 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:19.484 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:19.484 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:19.484 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:19.484 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:19.484 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:19.484 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:19.484 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:19.484 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:19.484 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:19.484 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:15:19.484 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:19.484 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:19.484 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:19.741 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:19.741 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.741 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:19.741 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.741 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:15:19.741 00:15:19.741 real 0m2.509s 00:15:19.741 user 0m7.427s 00:15:19.741 sys 0m0.639s 00:15:19.741 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:19.741 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:19.741 ************************************ 00:15:19.741 END TEST nvmf_bdevio 00:15:19.741 ************************************ 00:15:19.741 14:41:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:19.741 ************************************ 00:15:19.741 END TEST nvmf_target_core 00:15:19.741 ************************************ 00:15:19.741 00:15:19.741 real 2m26.937s 00:15:19.741 user 6m30.756s 00:15:19.741 sys 0m39.810s 00:15:19.741 14:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:19.741 14:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:19.741 14:41:28 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:15:19.741 14:41:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:19.741 14:41:28 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:19.741 14:41:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:19.741 ************************************ 00:15:19.741 START TEST nvmf_target_extra 00:15:19.741 ************************************ 00:15:19.741 14:41:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:15:19.741 * Looking for test storage... 
00:15:19.741 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:15:19.741 14:41:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:19.741 14:41:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:15:19.741 14:41:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:19.741 14:41:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:19.741 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:19.741 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:19.741 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:19.741 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:15:19.741 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:15:19.741 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:15:19.741 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:15:19.742 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:15:19.742 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:15:19.742 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:15:19.742 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:19.742 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:15:19.742 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:15:19.742 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:19.742 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:19.742 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:15:19.742 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:15:19.742 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:19.742 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:15:19.742 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:20.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.001 --rc genhtml_branch_coverage=1 00:15:20.001 --rc genhtml_function_coverage=1 00:15:20.001 --rc genhtml_legend=1 00:15:20.001 --rc geninfo_all_blocks=1 00:15:20.001 --rc geninfo_unexecuted_blocks=1 00:15:20.001 00:15:20.001 ' 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:20.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.001 --rc genhtml_branch_coverage=1 00:15:20.001 --rc genhtml_function_coverage=1 00:15:20.001 --rc genhtml_legend=1 00:15:20.001 --rc geninfo_all_blocks=1 00:15:20.001 --rc geninfo_unexecuted_blocks=1 00:15:20.001 00:15:20.001 ' 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:20.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.001 --rc genhtml_branch_coverage=1 00:15:20.001 --rc genhtml_function_coverage=1 00:15:20.001 --rc genhtml_legend=1 00:15:20.001 --rc geninfo_all_blocks=1 00:15:20.001 --rc geninfo_unexecuted_blocks=1 00:15:20.001 00:15:20.001 ' 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:20.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.001 --rc genhtml_branch_coverage=1 00:15:20.001 --rc genhtml_function_coverage=1 00:15:20.001 --rc genhtml_legend=1 00:15:20.001 --rc geninfo_all_blocks=1 00:15:20.001 --rc geninfo_unexecuted_blocks=1 00:15:20.001 00:15:20.001 ' 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:20.001 14:41:28 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:20.001 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:20.001 ************************************ 00:15:20.001 START TEST nvmf_auth_target 00:15:20.001 ************************************ 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:20.001 * Looking for test storage... 
00:15:20.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:15:20.001 14:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:20.001 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:20.001 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:20.001 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:20.001 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:20.001 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:20.001 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:20.001 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:20.001 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:20.001 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:20.001 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:20.001 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:20.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.002 --rc genhtml_branch_coverage=1 00:15:20.002 --rc genhtml_function_coverage=1 00:15:20.002 --rc genhtml_legend=1 00:15:20.002 --rc geninfo_all_blocks=1 00:15:20.002 --rc geninfo_unexecuted_blocks=1 00:15:20.002 00:15:20.002 ' 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:20.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.002 --rc genhtml_branch_coverage=1 00:15:20.002 --rc genhtml_function_coverage=1 00:15:20.002 --rc genhtml_legend=1 00:15:20.002 --rc geninfo_all_blocks=1 00:15:20.002 --rc geninfo_unexecuted_blocks=1 00:15:20.002 00:15:20.002 ' 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:20.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.002 --rc genhtml_branch_coverage=1 00:15:20.002 --rc genhtml_function_coverage=1 00:15:20.002 --rc genhtml_legend=1 00:15:20.002 --rc geninfo_all_blocks=1 00:15:20.002 --rc geninfo_unexecuted_blocks=1 00:15:20.002 00:15:20.002 ' 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:20.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.002 --rc genhtml_branch_coverage=1 00:15:20.002 --rc genhtml_function_coverage=1 00:15:20.002 --rc genhtml_legend=1 00:15:20.002 --rc geninfo_all_blocks=1 00:15:20.002 --rc geninfo_unexecuted_blocks=1 00:15:20.002 00:15:20.002 ' 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:20.002 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:20.002 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:20.003 
14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:20.003 Cannot find device "nvmf_init_br" 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:20.003 Cannot find device "nvmf_init_br2" 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:20.003 Cannot find device "nvmf_tgt_br" 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:20.003 Cannot find device "nvmf_tgt_br2" 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:20.003 Cannot find device "nvmf_init_br" 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:20.003 Cannot find device "nvmf_init_br2" 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:15:20.003 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:20.003 Cannot find device "nvmf_tgt_br" 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:20.264 Cannot find device "nvmf_tgt_br2" 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:20.264 Cannot find device "nvmf_br" 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:20.264 Cannot find device "nvmf_init_if" 00:15:20.264 14:41:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:20.264 Cannot find device "nvmf_init_if2" 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:20.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:20.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:20.264 14:41:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:20.264 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:20.264 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:15:20.264 00:15:20.264 --- 10.0.0.3 ping statistics --- 00:15:20.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.264 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:20.264 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:20.264 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:15:20.264 00:15:20.264 --- 10.0.0.4 ping statistics --- 00:15:20.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.264 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:20.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:20.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.015 ms 00:15:20.264 00:15:20.264 --- 10.0.0.1 ping statistics --- 00:15:20.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.264 rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms 00:15:20.264 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:20.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:20.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:15:20.265 00:15:20.265 --- 10.0.0.2 ping statistics --- 00:15:20.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.265 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:15:20.265 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:20.265 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:15:20.265 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:20.265 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:20.265 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:20.265 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:20.265 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:20.265 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:20.265 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:20.265 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:20.265 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:20.265 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:20.265 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.265 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=66206 00:15:20.265 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 66206 00:15:20.265 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:20.265 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 66206 ']' 00:15:20.265 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.265 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:20.265 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
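The nvmf_veth_init trace above builds a small virtual test network: a veth pair per endpoint, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, everything joined by the nvmf_br bridge, and iptables rules opening TCP port 4420 before the connectivity pings. A condensed sketch of the same topology, assuming root privileges and reusing the device names and 10.0.0.0/24 addresses shown in the trace (the second interface pair, the second target address, and the teardown steps are omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                              # bridge ties both pairs together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3    # host side now reaches the target address inside the namespace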
00:15:20.265 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:20.265 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.199 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:21.199 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:15:21.199 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:21.199 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:21.199 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.199 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.199 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=66238 00:15:21.199 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:21.199 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:21.199 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:21.199 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:21.200 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:21.200 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:21.200 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:15:21.200 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:21.200 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:21.200 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e8e57230b5463730d3dbc71994ae0b0544e4d9a5ef1f1e17 00:15:21.200 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:21.200 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.En4 00:15:21.200 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e8e57230b5463730d3dbc71994ae0b0544e4d9a5ef1f1e17 0 00:15:21.200 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e8e57230b5463730d3dbc71994ae0b0544e4d9a5ef1f1e17 0 00:15:21.200 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:21.200 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:21.200 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e8e57230b5463730d3dbc71994ae0b0544e4d9a5ef1f1e17 00:15:21.200 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:21.200 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:21.458 14:41:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.En4 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.En4 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.En4 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=94e06d7e8ac14cb51d7977a347782aad80f2b1ce256eb9b0b945536da32665d5 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.fDr 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 94e06d7e8ac14cb51d7977a347782aad80f2b1ce256eb9b0b945536da32665d5 3 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 94e06d7e8ac14cb51d7977a347782aad80f2b1ce256eb9b0b945536da32665d5 3 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=94e06d7e8ac14cb51d7977a347782aad80f2b1ce256eb9b0b945536da32665d5 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.fDr 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.fDr 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.fDr 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:21.458 14:41:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f35d1bf4413028b5b7c66e92ea7cc636 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Voq 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f35d1bf4413028b5b7c66e92ea7cc636 1 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f35d1bf4413028b5b7c66e92ea7cc636 1 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f35d1bf4413028b5b7c66e92ea7cc636 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Voq 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Voq 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Voq 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2bcd50bfc884f41b646b39ee55d4f294aabb53bead590a8a 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.eU2 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2bcd50bfc884f41b646b39ee55d4f294aabb53bead590a8a 2 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2bcd50bfc884f41b646b39ee55d4f294aabb53bead590a8a 2 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2bcd50bfc884f41b646b39ee55d4f294aabb53bead590a8a 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.eU2 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.eU2 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.eU2 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1debc232046ba8161feb21ba8edba162c91027b7f40b5ad4 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.I9Q 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1debc232046ba8161feb21ba8edba162c91027b7f40b5ad4 2 00:15:21.458 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1debc232046ba8161feb21ba8edba162c91027b7f40b5ad4 2 00:15:21.459 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:21.459 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:21.459 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1debc232046ba8161feb21ba8edba162c91027b7f40b5ad4 00:15:21.459 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:21.459 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:21.459 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.I9Q 00:15:21.459 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.I9Q 00:15:21.459 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.I9Q 00:15:21.459 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:21.459 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:21.459 14:41:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:21.459 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:21.459 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:21.459 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:21.459 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:21.459 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3cdfb8b2d94a9bee2fb9ff521c24b09d 00:15:21.459 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:21.459 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.EiI 00:15:21.459 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3cdfb8b2d94a9bee2fb9ff521c24b09d 1 00:15:21.459 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3cdfb8b2d94a9bee2fb9ff521c24b09d 1 00:15:21.459 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:21.459 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:21.459 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3cdfb8b2d94a9bee2fb9ff521c24b09d 00:15:21.459 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:21.459 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:21.719 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.EiI 00:15:21.720 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.EiI 00:15:21.720 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.EiI 00:15:21.720 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:21.720 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:21.720 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:21.720 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:21.720 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:21.720 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:21.720 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:21.720 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=87b6058bd14e567ee45e7d1f1c05ff855d3a7084164807b409245a499b62a324 00:15:21.720 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:21.720 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.rRN 00:15:21.720 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
87b6058bd14e567ee45e7d1f1c05ff855d3a7084164807b409245a499b62a324 3 00:15:21.720 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 87b6058bd14e567ee45e7d1f1c05ff855d3a7084164807b409245a499b62a324 3 00:15:21.720 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:21.720 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:21.720 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=87b6058bd14e567ee45e7d1f1c05ff855d3a7084164807b409245a499b62a324 00:15:21.720 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:21.720 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:21.720 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.rRN 00:15:21.720 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.rRN 00:15:21.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.720 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.rRN 00:15:21.720 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:21.720 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 66206 00:15:21.720 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 66206 ']' 00:15:21.720 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.720 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:21.720 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.720 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:21.720 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:21.981 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:21.981 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:15:21.981 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 66238 /var/tmp/host.sock 00:15:21.981 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 66238 ']' 00:15:21.981 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:15:21.981 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:21.981 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
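Each gen_dhchap_key call traced above draws random bytes with xxd, writes the secret to a mode-0600 temp file, and wraps it in the DHHC-1 representation consumed later by the nvme connect and bdev_nvme_attach_controller calls. The formatting helper itself runs as an inline "python -" snippet whose body the trace elides, so the sketch below is only an approximation: treating the base64 payload as the ASCII hex key followed by a CRC32 trailer is an assumption inferred from the DHHC-1:00:... strings further down in this log, not something the trace confirms.

# Hypothetical stand-in for gen_dhchap_key/format_dhchap_key (null digest case only).
key=$(xxd -p -c0 -l 24 /dev/urandom)        # 24 random bytes -> 48 hex characters
keyfile=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" <<'PY' > "$keyfile"
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte trailer; exact format is an assumption
print("DHHC-1:00:" + base64.b64encode(key + crc).decode() + ":")
PY
chmod 0600 "$keyfile"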
00:15:21.981 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:21.981 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.981 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:21.981 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:15:21.981 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:21.981 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.981 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.242 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.242 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:22.242 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.En4 00:15:22.242 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.242 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.242 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.242 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.En4 00:15:22.242 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.En4 00:15:22.242 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.fDr ]] 00:15:22.242 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fDr 00:15:22.242 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.242 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.242 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.242 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fDr 00:15:22.242 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fDr 00:15:22.503 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:22.503 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Voq 00:15:22.503 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.503 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.503 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.503 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Voq 00:15:22.503 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Voq 00:15:22.761 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.eU2 ]] 00:15:22.761 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eU2 00:15:22.761 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.761 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.761 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.761 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eU2 00:15:22.761 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eU2 00:15:23.020 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:23.020 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.I9Q 00:15:23.020 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.020 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.020 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.021 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.I9Q 00:15:23.021 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.I9Q 00:15:23.292 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.EiI ]] 00:15:23.292 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.EiI 00:15:23.292 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.292 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.292 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.292 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.EiI 00:15:23.292 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.EiI 00:15:23.292 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:23.292 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.rRN 00:15:23.292 14:41:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.293 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.293 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.293 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.rRN 00:15:23.293 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.rRN 00:15:23.551 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:23.551 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:23.551 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:23.551 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.551 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:23.551 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:23.551 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:23.551 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.551 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:23.551 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:23.551 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:23.551 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.551 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.551 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.551 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.551 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.551 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.552 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.552 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.810 00:15:23.810 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:23.810 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:23.810 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.069 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.069 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.069 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.069 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.069 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.069 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.069 { 00:15:24.069 "cntlid": 1, 00:15:24.069 "qid": 0, 00:15:24.069 "state": "enabled", 00:15:24.069 "thread": "nvmf_tgt_poll_group_000", 00:15:24.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:15:24.069 "listen_address": { 00:15:24.069 "trtype": "TCP", 00:15:24.069 "adrfam": "IPv4", 00:15:24.069 "traddr": "10.0.0.3", 00:15:24.069 "trsvcid": "4420" 00:15:24.069 }, 00:15:24.069 "peer_address": { 00:15:24.069 "trtype": "TCP", 00:15:24.069 "adrfam": "IPv4", 00:15:24.069 "traddr": "10.0.0.1", 00:15:24.069 "trsvcid": "53712" 00:15:24.069 }, 00:15:24.069 "auth": { 00:15:24.069 "state": "completed", 00:15:24.069 "digest": "sha256", 00:15:24.069 "dhgroup": "null" 00:15:24.069 } 00:15:24.069 } 00:15:24.069 ]' 00:15:24.069 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.069 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:24.069 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.327 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:24.327 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.327 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.327 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.327 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.585 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:15:24.585 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:15:28.766 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.766 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:15:28.766 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.766 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.767 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.767 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.767 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:28.767 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:28.767 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:28.767 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.767 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:28.767 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:28.767 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:28.767 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.767 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.767 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.767 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.767 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.767 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.767 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.767 14:41:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.024 00:15:29.024 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.024 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.024 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.282 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.282 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.282 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.282 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.282 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.282 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.282 { 00:15:29.282 "cntlid": 3, 00:15:29.282 "qid": 0, 00:15:29.282 "state": "enabled", 00:15:29.282 "thread": "nvmf_tgt_poll_group_000", 00:15:29.282 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:15:29.282 "listen_address": { 00:15:29.282 "trtype": "TCP", 00:15:29.282 "adrfam": "IPv4", 00:15:29.282 "traddr": "10.0.0.3", 00:15:29.282 "trsvcid": "4420" 00:15:29.282 }, 00:15:29.282 "peer_address": { 00:15:29.282 "trtype": "TCP", 00:15:29.282 "adrfam": "IPv4", 00:15:29.282 "traddr": "10.0.0.1", 00:15:29.282 "trsvcid": "54528" 00:15:29.282 }, 00:15:29.282 "auth": { 00:15:29.282 "state": "completed", 00:15:29.282 "digest": "sha256", 00:15:29.282 "dhgroup": "null" 00:15:29.282 } 00:15:29.282 } 00:15:29.282 ]' 00:15:29.282 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.282 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:29.282 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.282 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:29.282 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.541 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.541 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.541 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.541 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret 
DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:15:29.541 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:15:30.474 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.474 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:15:30.474 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.474 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.474 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.474 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.474 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:30.474 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:30.474 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:30.474 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.474 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:30.474 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:30.474 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:30.474 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.474 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.474 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.474 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.474 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.474 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.474 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.474 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.732 00:15:30.732 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.732 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.732 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.990 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.990 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.990 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.990 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.990 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.990 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.990 { 00:15:30.990 "cntlid": 5, 00:15:30.990 "qid": 0, 00:15:30.990 "state": "enabled", 00:15:30.990 "thread": "nvmf_tgt_poll_group_000", 00:15:30.990 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:15:30.990 "listen_address": { 00:15:30.990 "trtype": "TCP", 00:15:30.990 "adrfam": "IPv4", 00:15:30.990 "traddr": "10.0.0.3", 00:15:30.990 "trsvcid": "4420" 00:15:30.990 }, 00:15:30.990 "peer_address": { 00:15:30.990 "trtype": "TCP", 00:15:30.990 "adrfam": "IPv4", 00:15:30.990 "traddr": "10.0.0.1", 00:15:30.990 "trsvcid": "54566" 00:15:30.990 }, 00:15:30.990 "auth": { 00:15:30.990 "state": "completed", 00:15:30.990 "digest": "sha256", 00:15:30.990 "dhgroup": "null" 00:15:30.990 } 00:15:30.990 } 00:15:30.990 ]' 00:15:30.990 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.990 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:30.990 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.990 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:30.990 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.990 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.990 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.990 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.248 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:15:31.248 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:15:31.814 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.814 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:15:31.814 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.814 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.814 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.814 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.814 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:31.814 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:32.072 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:32.072 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.072 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:32.072 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:32.072 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:32.072 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.072 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key3 00:15:32.073 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.073 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.073 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.073 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:32.073 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:32.073 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:32.330 00:15:32.330 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.330 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.330 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.587 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.587 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.587 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.587 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.587 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.587 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.587 { 00:15:32.587 "cntlid": 7, 00:15:32.587 "qid": 0, 00:15:32.587 "state": "enabled", 00:15:32.587 "thread": "nvmf_tgt_poll_group_000", 00:15:32.587 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:15:32.587 "listen_address": { 00:15:32.587 "trtype": "TCP", 00:15:32.587 "adrfam": "IPv4", 00:15:32.587 "traddr": "10.0.0.3", 00:15:32.587 "trsvcid": "4420" 00:15:32.587 }, 00:15:32.587 "peer_address": { 00:15:32.587 "trtype": "TCP", 00:15:32.587 "adrfam": "IPv4", 00:15:32.587 "traddr": "10.0.0.1", 00:15:32.587 "trsvcid": "54614" 00:15:32.587 }, 00:15:32.587 "auth": { 00:15:32.587 "state": "completed", 00:15:32.587 "digest": "sha256", 00:15:32.587 "dhgroup": "null" 00:15:32.587 } 00:15:32.587 } 00:15:32.587 ]' 00:15:32.587 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.587 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:32.587 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.587 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:32.587 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.587 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.587 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.587 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.846 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:15:32.846 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:15:33.450 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.451 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:15:33.451 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.451 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.451 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.451 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:33.451 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.451 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:33.451 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:33.709 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:33.709 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.709 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:33.709 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:33.709 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:33.709 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.709 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.709 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.709 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.709 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.709 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.709 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.709 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.967 00:15:33.967 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.967 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.967 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.225 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.225 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.225 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.225 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.225 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.225 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.225 { 00:15:34.225 "cntlid": 9, 00:15:34.225 "qid": 0, 00:15:34.225 "state": "enabled", 00:15:34.225 "thread": "nvmf_tgt_poll_group_000", 00:15:34.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:15:34.225 "listen_address": { 00:15:34.225 "trtype": "TCP", 00:15:34.225 "adrfam": "IPv4", 00:15:34.225 "traddr": "10.0.0.3", 00:15:34.225 "trsvcid": "4420" 00:15:34.225 }, 00:15:34.225 "peer_address": { 00:15:34.225 "trtype": "TCP", 00:15:34.225 "adrfam": "IPv4", 00:15:34.225 "traddr": "10.0.0.1", 00:15:34.225 "trsvcid": "54638" 00:15:34.225 }, 00:15:34.225 "auth": { 00:15:34.225 "state": "completed", 00:15:34.225 "digest": "sha256", 00:15:34.225 "dhgroup": "ffdhe2048" 00:15:34.225 } 00:15:34.225 } 00:15:34.225 ]' 00:15:34.225 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.225 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:34.225 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.225 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:34.225 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.225 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.225 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.225 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.483 
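Condensed, the host-side half of each round above is the following RPC sequence. This is a sketch assembled from the commands in this run (the sha256/ffdhe2048 + key0/ckey0 combination just exercised); rpc_cmd is the suite's target-side RPC helper, /var/tmp/host.sock is the RPC socket of the host-role bdev application, and key0/ckey0 are key names registered earlier in the test:

  # restrict the host to the digest/dhgroup pair under test
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

  # authorize the host NQN on the target subsystem with the same key pair
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # attach a controller over the authenticated TCP qpair
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # confirm the controller exists and the qpair finished authentication
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.state, .[0].auth.digest, .[0].auth.dhgroup'   # completed / sha256 / ffdhe2048

  # tear down before the next key/dhgroup combination
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0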
14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:15:34.483 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:15:35.048 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.048 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:15:35.048 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.048 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.048 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.048 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.048 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:35.048 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:35.304 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:35.304 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.304 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:35.304 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:35.304 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:35.304 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.305 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.305 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.305 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.305 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.305 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.305 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.305 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.562 00:15:35.562 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.562 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.562 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.820 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.820 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.820 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.820 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.820 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.820 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.820 { 00:15:35.820 "cntlid": 11, 00:15:35.820 "qid": 0, 00:15:35.820 "state": "enabled", 00:15:35.820 "thread": "nvmf_tgt_poll_group_000", 00:15:35.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:15:35.820 "listen_address": { 00:15:35.820 "trtype": "TCP", 00:15:35.820 "adrfam": "IPv4", 00:15:35.820 "traddr": "10.0.0.3", 00:15:35.820 "trsvcid": "4420" 00:15:35.820 }, 00:15:35.820 "peer_address": { 00:15:35.820 "trtype": "TCP", 00:15:35.820 "adrfam": "IPv4", 00:15:35.820 "traddr": "10.0.0.1", 00:15:35.820 "trsvcid": "54680" 00:15:35.820 }, 00:15:35.820 "auth": { 00:15:35.820 "state": "completed", 00:15:35.820 "digest": "sha256", 00:15:35.820 "dhgroup": "ffdhe2048" 00:15:35.820 } 00:15:35.820 } 00:15:35.820 ]' 00:15:35.820 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.820 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:35.820 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.820 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:35.820 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.078 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.078 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.078 
14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.336 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:15:36.336 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:15:36.901 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.901 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:15:36.901 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.901 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.901 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.901 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.901 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:36.901 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:37.158 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:37.158 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.158 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:37.158 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:37.158 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:37.158 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.158 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.158 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.158 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.158 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:15:37.158 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.158 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.158 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.415 00:15:37.415 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.415 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.415 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.673 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.673 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.673 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.673 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.673 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.673 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.673 { 00:15:37.673 "cntlid": 13, 00:15:37.673 "qid": 0, 00:15:37.673 "state": "enabled", 00:15:37.673 "thread": "nvmf_tgt_poll_group_000", 00:15:37.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:15:37.673 "listen_address": { 00:15:37.673 "trtype": "TCP", 00:15:37.673 "adrfam": "IPv4", 00:15:37.673 "traddr": "10.0.0.3", 00:15:37.673 "trsvcid": "4420" 00:15:37.673 }, 00:15:37.673 "peer_address": { 00:15:37.673 "trtype": "TCP", 00:15:37.673 "adrfam": "IPv4", 00:15:37.673 "traddr": "10.0.0.1", 00:15:37.673 "trsvcid": "54716" 00:15:37.673 }, 00:15:37.673 "auth": { 00:15:37.673 "state": "completed", 00:15:37.673 "digest": "sha256", 00:15:37.673 "dhgroup": "ffdhe2048" 00:15:37.673 } 00:15:37.673 } 00:15:37.673 ]' 00:15:37.673 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.673 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:37.673 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.673 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:37.673 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.673 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.673 14:41:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.673 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.930 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:15:37.930 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:15:38.494 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.494 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:15:38.494 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.494 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.494 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.494 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.494 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:38.494 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:38.752 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:38.752 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.752 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:38.752 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:38.752 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:38.752 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.752 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key3 00:15:38.752 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.752 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:38.752 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.752 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:38.752 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:38.752 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:39.009 00:15:39.009 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.009 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.009 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.265 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.265 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.265 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.265 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.265 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.265 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.265 { 00:15:39.265 "cntlid": 15, 00:15:39.265 "qid": 0, 00:15:39.265 "state": "enabled", 00:15:39.265 "thread": "nvmf_tgt_poll_group_000", 00:15:39.265 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:15:39.265 "listen_address": { 00:15:39.265 "trtype": "TCP", 00:15:39.265 "adrfam": "IPv4", 00:15:39.265 "traddr": "10.0.0.3", 00:15:39.265 "trsvcid": "4420" 00:15:39.265 }, 00:15:39.265 "peer_address": { 00:15:39.265 "trtype": "TCP", 00:15:39.265 "adrfam": "IPv4", 00:15:39.265 "traddr": "10.0.0.1", 00:15:39.265 "trsvcid": "54742" 00:15:39.265 }, 00:15:39.265 "auth": { 00:15:39.265 "state": "completed", 00:15:39.265 "digest": "sha256", 00:15:39.265 "dhgroup": "ffdhe2048" 00:15:39.265 } 00:15:39.265 } 00:15:39.265 ]' 00:15:39.265 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.265 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.265 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.265 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:39.265 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.265 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.265 
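Each round is also bracketed by a kernel-initiator leg that drives the same DH-HMAC-CHAP material through nvme-cli. A sketch of that leg, with the literal DHHC-1 secrets from this run replaced by placeholders (the --dhchap-ctrl-secret flag is passed only for keys that have a matching controller key, and is omitted in the key3 rounds):

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa

  # connect one I/O queue through DH-HMAC-CHAP, then drop the session again
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOSTNQN" --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 \
      --dhchap-secret "DHHC-1:00:<host secret>" \
      --dhchap-ctrl-secret "DHHC-1:03:<controller secret>"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0    # expect: disconnected 1 controller(s)

  # de-authorize the host entry so the next digest/dhgroup round starts clean
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"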
14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.265 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.830 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:15:39.830 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:15:40.395 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.395 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:15:40.395 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.395 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.395 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.395 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:40.395 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.395 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:40.395 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:40.395 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:40.395 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.395 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:40.395 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:40.395 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:40.395 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.395 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.395 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.395 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:40.395 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.395 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.395 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.395 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.658 00:15:40.658 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.658 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.658 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.917 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.917 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.917 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.917 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.917 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.917 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.917 { 00:15:40.917 "cntlid": 17, 00:15:40.917 "qid": 0, 00:15:40.917 "state": "enabled", 00:15:40.917 "thread": "nvmf_tgt_poll_group_000", 00:15:40.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:15:40.917 "listen_address": { 00:15:40.917 "trtype": "TCP", 00:15:40.917 "adrfam": "IPv4", 00:15:40.917 "traddr": "10.0.0.3", 00:15:40.917 "trsvcid": "4420" 00:15:40.917 }, 00:15:40.917 "peer_address": { 00:15:40.917 "trtype": "TCP", 00:15:40.917 "adrfam": "IPv4", 00:15:40.917 "traddr": "10.0.0.1", 00:15:40.917 "trsvcid": "43876" 00:15:40.917 }, 00:15:40.917 "auth": { 00:15:40.917 "state": "completed", 00:15:40.917 "digest": "sha256", 00:15:40.917 "dhgroup": "ffdhe3072" 00:15:40.917 } 00:15:40.917 } 00:15:40.917 ]' 00:15:40.917 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.917 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:40.917 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.917 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:40.917 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.174 14:41:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.174 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.174 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.174 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:15:41.174 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:15:41.738 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.738 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:15:41.738 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.738 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.738 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.738 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.738 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:41.738 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:41.994 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:41.994 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.994 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:41.994 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:41.994 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:41.994 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.994 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:15:41.994 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.994 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.995 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.995 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.995 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.995 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.558 00:15:42.558 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.558 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.558 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.558 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.558 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.558 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.558 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.558 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.558 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.558 { 00:15:42.558 "cntlid": 19, 00:15:42.558 "qid": 0, 00:15:42.558 "state": "enabled", 00:15:42.558 "thread": "nvmf_tgt_poll_group_000", 00:15:42.558 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:15:42.558 "listen_address": { 00:15:42.558 "trtype": "TCP", 00:15:42.558 "adrfam": "IPv4", 00:15:42.558 "traddr": "10.0.0.3", 00:15:42.558 "trsvcid": "4420" 00:15:42.558 }, 00:15:42.558 "peer_address": { 00:15:42.558 "trtype": "TCP", 00:15:42.558 "adrfam": "IPv4", 00:15:42.558 "traddr": "10.0.0.1", 00:15:42.558 "trsvcid": "43906" 00:15:42.558 }, 00:15:42.558 "auth": { 00:15:42.558 "state": "completed", 00:15:42.558 "digest": "sha256", 00:15:42.558 "dhgroup": "ffdhe3072" 00:15:42.558 } 00:15:42.558 } 00:15:42.558 ]' 00:15:42.558 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.815 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:42.815 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.815 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:42.815 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.815 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.815 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.815 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.072 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:15:43.072 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:15:43.637 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.637 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:15:43.637 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.637 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.637 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.637 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.637 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:43.637 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:43.896 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:43.896 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.896 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:43.896 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:43.896 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:43.896 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.896 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.896 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.896 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.896 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.896 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.896 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.896 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.156 00:15:44.156 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.156 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.156 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.415 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.415 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.415 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.415 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.415 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.415 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.415 { 00:15:44.415 "cntlid": 21, 00:15:44.415 "qid": 0, 00:15:44.415 "state": "enabled", 00:15:44.415 "thread": "nvmf_tgt_poll_group_000", 00:15:44.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:15:44.415 "listen_address": { 00:15:44.415 "trtype": "TCP", 00:15:44.415 "adrfam": "IPv4", 00:15:44.415 "traddr": "10.0.0.3", 00:15:44.415 "trsvcid": "4420" 00:15:44.415 }, 00:15:44.415 "peer_address": { 00:15:44.415 "trtype": "TCP", 00:15:44.415 "adrfam": "IPv4", 00:15:44.415 "traddr": "10.0.0.1", 00:15:44.415 "trsvcid": "43926" 00:15:44.415 }, 00:15:44.415 "auth": { 00:15:44.415 "state": "completed", 00:15:44.415 "digest": "sha256", 00:15:44.415 "dhgroup": "ffdhe3072" 00:15:44.415 } 00:15:44.415 } 00:15:44.415 ]' 00:15:44.415 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.415 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:44.415 14:41:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.415 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:44.415 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.415 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.415 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.415 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.673 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:15:44.673 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:15:45.248 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.248 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:15:45.248 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.248 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.248 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.249 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.249 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:45.249 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:45.507 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:45.507 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.507 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:45.507 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:45.507 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:45.507 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.507 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key3 00:15:45.507 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.507 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.507 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.507 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:45.507 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:45.507 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:45.765 00:15:45.765 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.765 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.765 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.024 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.024 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.024 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.024 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.024 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.024 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.024 { 00:15:46.024 "cntlid": 23, 00:15:46.024 "qid": 0, 00:15:46.024 "state": "enabled", 00:15:46.024 "thread": "nvmf_tgt_poll_group_000", 00:15:46.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:15:46.024 "listen_address": { 00:15:46.024 "trtype": "TCP", 00:15:46.024 "adrfam": "IPv4", 00:15:46.024 "traddr": "10.0.0.3", 00:15:46.024 "trsvcid": "4420" 00:15:46.024 }, 00:15:46.024 "peer_address": { 00:15:46.024 "trtype": "TCP", 00:15:46.024 "adrfam": "IPv4", 00:15:46.024 "traddr": "10.0.0.1", 00:15:46.024 "trsvcid": "43942" 00:15:46.024 }, 00:15:46.024 "auth": { 00:15:46.024 "state": "completed", 00:15:46.024 "digest": "sha256", 00:15:46.024 "dhgroup": "ffdhe3072" 00:15:46.024 } 00:15:46.024 } 00:15:46.024 ]' 00:15:46.024 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.024 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:15:46.024 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.024 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:46.024 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.024 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.024 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.024 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.282 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:15:46.282 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:15:46.848 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.848 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:15:46.848 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.848 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.848 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.848 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:46.848 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.848 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:46.848 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:47.106 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:47.106 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.106 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:47.106 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:47.106 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:47.106 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.106 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.106 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.106 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.106 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.106 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.106 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.106 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.364 00:15:47.364 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.364 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.364 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.623 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.623 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.623 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.623 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.623 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.623 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.623 { 00:15:47.623 "cntlid": 25, 00:15:47.623 "qid": 0, 00:15:47.623 "state": "enabled", 00:15:47.623 "thread": "nvmf_tgt_poll_group_000", 00:15:47.623 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:15:47.623 "listen_address": { 00:15:47.623 "trtype": "TCP", 00:15:47.623 "adrfam": "IPv4", 00:15:47.623 "traddr": "10.0.0.3", 00:15:47.623 "trsvcid": "4420" 00:15:47.623 }, 00:15:47.623 "peer_address": { 00:15:47.623 "trtype": "TCP", 00:15:47.623 "adrfam": "IPv4", 00:15:47.623 "traddr": "10.0.0.1", 00:15:47.623 "trsvcid": "43972" 00:15:47.623 }, 00:15:47.623 "auth": { 00:15:47.623 "state": "completed", 00:15:47.623 "digest": "sha256", 00:15:47.623 "dhgroup": "ffdhe4096" 00:15:47.623 } 00:15:47.623 } 00:15:47.623 ]' 00:15:47.623 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:15:47.623 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:47.623 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.623 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:47.623 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.623 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.623 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.623 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.883 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:15:47.883 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:15:48.816 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.816 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:15:48.816 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.816 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.816 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.816 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.816 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:48.816 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:48.816 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:48.816 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.816 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:48.816 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:48.816 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:48.816 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.816 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.816 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.816 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.816 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.816 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.816 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.816 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.075 00:15:49.333 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.333 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.333 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.334 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.334 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.334 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.334 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.334 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.334 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.334 { 00:15:49.334 "cntlid": 27, 00:15:49.334 "qid": 0, 00:15:49.334 "state": "enabled", 00:15:49.334 "thread": "nvmf_tgt_poll_group_000", 00:15:49.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:15:49.334 "listen_address": { 00:15:49.334 "trtype": "TCP", 00:15:49.334 "adrfam": "IPv4", 00:15:49.334 "traddr": "10.0.0.3", 00:15:49.334 "trsvcid": "4420" 00:15:49.334 }, 00:15:49.334 "peer_address": { 00:15:49.334 "trtype": "TCP", 00:15:49.334 "adrfam": "IPv4", 00:15:49.334 "traddr": "10.0.0.1", 00:15:49.334 "trsvcid": "59052" 00:15:49.334 }, 00:15:49.334 "auth": { 00:15:49.334 "state": "completed", 
00:15:49.334 "digest": "sha256", 00:15:49.334 "dhgroup": "ffdhe4096" 00:15:49.334 } 00:15:49.334 } 00:15:49.334 ]' 00:15:49.334 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.334 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:49.334 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.593 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:49.593 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.593 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.593 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.593 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.852 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:15:49.852 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:15:50.418 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.418 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:15:50.418 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.418 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.418 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.418 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.418 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:50.418 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:50.675 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:50.675 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.675 14:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:50.675 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:50.675 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:50.675 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.675 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.675 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.675 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.675 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.675 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.675 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.675 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.932 00:15:50.932 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.932 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.933 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.933 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.933 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.933 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.933 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.197 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.197 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.197 { 00:15:51.197 "cntlid": 29, 00:15:51.197 "qid": 0, 00:15:51.197 "state": "enabled", 00:15:51.197 "thread": "nvmf_tgt_poll_group_000", 00:15:51.197 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:15:51.197 "listen_address": { 00:15:51.197 "trtype": "TCP", 00:15:51.197 "adrfam": "IPv4", 00:15:51.197 "traddr": "10.0.0.3", 00:15:51.197 "trsvcid": "4420" 00:15:51.197 }, 00:15:51.197 "peer_address": { 00:15:51.197 "trtype": "TCP", 00:15:51.197 "adrfam": 
"IPv4", 00:15:51.197 "traddr": "10.0.0.1", 00:15:51.197 "trsvcid": "59066" 00:15:51.197 }, 00:15:51.197 "auth": { 00:15:51.197 "state": "completed", 00:15:51.197 "digest": "sha256", 00:15:51.197 "dhgroup": "ffdhe4096" 00:15:51.197 } 00:15:51.197 } 00:15:51.197 ]' 00:15:51.197 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.197 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:51.197 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.197 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:51.197 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.197 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.197 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.197 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.457 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:15:51.457 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:15:52.056 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.056 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:15:52.056 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.056 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.056 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.056 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.056 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:52.056 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:52.315 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:52.315 14:42:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.315 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:52.315 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:52.315 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:52.315 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.315 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key3 00:15:52.315 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.315 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.315 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.315 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:52.315 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:52.315 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:52.572 00:15:52.572 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.572 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.572 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.830 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.830 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.830 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.830 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.830 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.830 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.830 { 00:15:52.830 "cntlid": 31, 00:15:52.830 "qid": 0, 00:15:52.830 "state": "enabled", 00:15:52.830 "thread": "nvmf_tgt_poll_group_000", 00:15:52.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:15:52.830 "listen_address": { 00:15:52.830 "trtype": "TCP", 00:15:52.830 "adrfam": "IPv4", 00:15:52.830 "traddr": "10.0.0.3", 00:15:52.830 "trsvcid": "4420" 00:15:52.830 }, 00:15:52.830 "peer_address": { 00:15:52.830 "trtype": "TCP", 
00:15:52.830 "adrfam": "IPv4", 00:15:52.830 "traddr": "10.0.0.1", 00:15:52.830 "trsvcid": "59082" 00:15:52.830 }, 00:15:52.830 "auth": { 00:15:52.830 "state": "completed", 00:15:52.830 "digest": "sha256", 00:15:52.830 "dhgroup": "ffdhe4096" 00:15:52.830 } 00:15:52.830 } 00:15:52.830 ]' 00:15:52.830 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.830 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:52.830 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.830 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:52.830 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.830 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.830 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.830 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.129 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:15:53.129 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:15:53.694 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.694 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:15:53.694 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.695 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.695 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.695 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:53.695 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.695 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:53.695 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:53.952 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:53.952 
14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.952 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:53.952 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:53.952 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:53.952 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.952 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.952 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.952 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.952 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.952 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.952 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.952 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.255 00:15:54.255 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.255 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.255 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.514 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.514 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.514 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.514 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.515 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.515 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.515 { 00:15:54.515 "cntlid": 33, 00:15:54.515 "qid": 0, 00:15:54.515 "state": "enabled", 00:15:54.515 "thread": "nvmf_tgt_poll_group_000", 00:15:54.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:15:54.515 "listen_address": { 00:15:54.515 "trtype": "TCP", 00:15:54.515 "adrfam": "IPv4", 00:15:54.515 "traddr": 
"10.0.0.3", 00:15:54.515 "trsvcid": "4420" 00:15:54.515 }, 00:15:54.515 "peer_address": { 00:15:54.515 "trtype": "TCP", 00:15:54.515 "adrfam": "IPv4", 00:15:54.515 "traddr": "10.0.0.1", 00:15:54.515 "trsvcid": "59100" 00:15:54.515 }, 00:15:54.515 "auth": { 00:15:54.515 "state": "completed", 00:15:54.515 "digest": "sha256", 00:15:54.515 "dhgroup": "ffdhe6144" 00:15:54.515 } 00:15:54.515 } 00:15:54.515 ]' 00:15:54.515 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.515 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:54.515 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.515 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:54.515 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.515 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.515 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.515 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.773 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:15:54.773 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:15:55.338 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.338 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:15:55.338 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.338 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.338 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.338 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.338 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:55.338 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:55.595 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:55.595 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.595 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:55.595 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:55.595 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:55.595 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.595 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.595 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.595 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.595 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.595 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.595 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.595 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.852 00:15:55.852 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.852 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.852 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.110 14:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.110 14:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.110 14:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.110 14:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.110 14:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.110 14:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.110 { 00:15:56.110 "cntlid": 35, 00:15:56.110 "qid": 0, 00:15:56.110 "state": "enabled", 00:15:56.110 "thread": "nvmf_tgt_poll_group_000", 
00:15:56.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:15:56.110 "listen_address": { 00:15:56.110 "trtype": "TCP", 00:15:56.110 "adrfam": "IPv4", 00:15:56.110 "traddr": "10.0.0.3", 00:15:56.110 "trsvcid": "4420" 00:15:56.110 }, 00:15:56.110 "peer_address": { 00:15:56.110 "trtype": "TCP", 00:15:56.110 "adrfam": "IPv4", 00:15:56.110 "traddr": "10.0.0.1", 00:15:56.110 "trsvcid": "59120" 00:15:56.110 }, 00:15:56.110 "auth": { 00:15:56.110 "state": "completed", 00:15:56.110 "digest": "sha256", 00:15:56.110 "dhgroup": "ffdhe6144" 00:15:56.110 } 00:15:56.110 } 00:15:56.110 ]' 00:15:56.110 14:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.110 14:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:56.110 14:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.367 14:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:56.367 14:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.367 14:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.367 14:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.367 14:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.368 14:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:15:56.368 14:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:15:56.934 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.934 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:15:56.934 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.934 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.934 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.934 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.934 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:56.934 14:42:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:57.191 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:57.191 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.191 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:57.191 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:57.192 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:57.192 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.192 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.192 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.192 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.192 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.192 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.192 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.192 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.756 00:15:57.756 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.756 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.757 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.757 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.757 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.757 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.757 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.757 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.757 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.757 { 
00:15:57.757 "cntlid": 37, 00:15:57.757 "qid": 0, 00:15:57.757 "state": "enabled", 00:15:57.757 "thread": "nvmf_tgt_poll_group_000", 00:15:57.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:15:57.757 "listen_address": { 00:15:57.757 "trtype": "TCP", 00:15:57.757 "adrfam": "IPv4", 00:15:57.757 "traddr": "10.0.0.3", 00:15:57.757 "trsvcid": "4420" 00:15:57.757 }, 00:15:57.757 "peer_address": { 00:15:57.757 "trtype": "TCP", 00:15:57.757 "adrfam": "IPv4", 00:15:57.757 "traddr": "10.0.0.1", 00:15:57.757 "trsvcid": "59146" 00:15:57.757 }, 00:15:57.757 "auth": { 00:15:57.757 "state": "completed", 00:15:57.757 "digest": "sha256", 00:15:57.757 "dhgroup": "ffdhe6144" 00:15:57.757 } 00:15:57.757 } 00:15:57.757 ]' 00:15:58.022 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.022 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:58.022 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.022 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:58.022 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.022 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.022 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.022 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.303 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:15:58.304 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:15:58.869 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.869 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:15:58.869 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.869 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.869 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.869 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.869 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:58.869 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:59.127 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:59.127 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.127 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:59.127 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:59.127 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:59.127 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.127 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key3 00:15:59.127 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.127 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.127 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.127 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:59.127 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:59.127 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:59.389 00:15:59.389 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.389 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.389 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.672 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.672 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.672 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.672 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.673 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.673 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:15:59.673 { 00:15:59.673 "cntlid": 39, 00:15:59.673 "qid": 0, 00:15:59.673 "state": "enabled", 00:15:59.673 "thread": "nvmf_tgt_poll_group_000", 00:15:59.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:15:59.673 "listen_address": { 00:15:59.673 "trtype": "TCP", 00:15:59.673 "adrfam": "IPv4", 00:15:59.673 "traddr": "10.0.0.3", 00:15:59.673 "trsvcid": "4420" 00:15:59.673 }, 00:15:59.673 "peer_address": { 00:15:59.673 "trtype": "TCP", 00:15:59.673 "adrfam": "IPv4", 00:15:59.673 "traddr": "10.0.0.1", 00:15:59.673 "trsvcid": "50006" 00:15:59.673 }, 00:15:59.673 "auth": { 00:15:59.673 "state": "completed", 00:15:59.673 "digest": "sha256", 00:15:59.673 "dhgroup": "ffdhe6144" 00:15:59.673 } 00:15:59.673 } 00:15:59.673 ]' 00:15:59.673 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.673 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:59.673 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.673 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:59.673 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.673 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.673 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.673 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.931 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:15:59.931 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:16:00.496 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.496 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:00.496 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.496 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.496 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.496 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:00.496 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.496 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:00.496 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:00.753 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:00.753 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.753 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:00.753 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:00.753 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:00.753 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.753 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.753 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.753 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.753 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.753 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.753 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.753 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.323 00:16:01.323 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.323 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.323 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.581 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.581 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.581 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.581 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.581 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:16:01.581 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.581 { 00:16:01.581 "cntlid": 41, 00:16:01.581 "qid": 0, 00:16:01.581 "state": "enabled", 00:16:01.581 "thread": "nvmf_tgt_poll_group_000", 00:16:01.581 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:01.581 "listen_address": { 00:16:01.581 "trtype": "TCP", 00:16:01.581 "adrfam": "IPv4", 00:16:01.581 "traddr": "10.0.0.3", 00:16:01.581 "trsvcid": "4420" 00:16:01.581 }, 00:16:01.581 "peer_address": { 00:16:01.581 "trtype": "TCP", 00:16:01.581 "adrfam": "IPv4", 00:16:01.581 "traddr": "10.0.0.1", 00:16:01.581 "trsvcid": "50028" 00:16:01.581 }, 00:16:01.581 "auth": { 00:16:01.581 "state": "completed", 00:16:01.581 "digest": "sha256", 00:16:01.581 "dhgroup": "ffdhe8192" 00:16:01.581 } 00:16:01.581 } 00:16:01.581 ]' 00:16:01.581 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.581 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:01.581 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.581 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:01.581 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.581 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.581 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.581 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.839 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:16:01.839 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:16:02.405 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.405 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:02.405 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.405 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.405 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
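The records above complete one connect_authenticate pass (sha256, ffdhe8192, key1) driven from the SPDK host. Stripped of the xtrace noise, the RPC sequence being exercised is roughly the following minimal sketch; it assumes the DH-HMAC-CHAP keys named key1/ckey1 were registered with the target and the host earlier in the script (outside this excerpt), and $hostnqn stands in for the UUID-based host NQN printed in the trace:

    # Host side: restrict negotiation to one digest/dhgroup combination.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    # Target side (rpc_cmd is the suite's wrapper around rpc.py for the target socket):
    # allow the host NQN on the subsystem and bind its key plus the controller key.
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Host side: attach a controller, authenticating with the same key pair.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # After the qpair auth check (next records), the controller is detached again.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0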
00:16:02.405 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.405 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:02.405 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:02.663 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:02.663 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.663 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:02.663 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:02.663 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:02.663 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.663 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.663 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.663 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.663 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.663 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.663 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.663 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.230 00:16:03.230 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.230 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.230 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.488 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.488 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.488 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.488 14:42:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.488 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.488 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.488 { 00:16:03.488 "cntlid": 43, 00:16:03.488 "qid": 0, 00:16:03.488 "state": "enabled", 00:16:03.488 "thread": "nvmf_tgt_poll_group_000", 00:16:03.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:03.488 "listen_address": { 00:16:03.488 "trtype": "TCP", 00:16:03.488 "adrfam": "IPv4", 00:16:03.488 "traddr": "10.0.0.3", 00:16:03.488 "trsvcid": "4420" 00:16:03.488 }, 00:16:03.488 "peer_address": { 00:16:03.488 "trtype": "TCP", 00:16:03.488 "adrfam": "IPv4", 00:16:03.488 "traddr": "10.0.0.1", 00:16:03.488 "trsvcid": "50050" 00:16:03.488 }, 00:16:03.488 "auth": { 00:16:03.488 "state": "completed", 00:16:03.488 "digest": "sha256", 00:16:03.488 "dhgroup": "ffdhe8192" 00:16:03.488 } 00:16:03.488 } 00:16:03.488 ]' 00:16:03.488 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.488 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:03.488 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.488 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:03.488 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.488 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.488 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.488 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.746 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:16:03.746 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:16:04.312 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.312 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:04.312 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.312 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
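The nvmf_subsystem_get_qpairs output interleaved above is what the test actually asserts on: the admin qpair must report that authentication completed with exactly the digest and DH group forced via bdev_nvme_set_options. Condensed, and using the same jq filters as the trace, the check amounts to:

    # Query the target for the qpairs of the subsystem under test.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    # The suite compares these three fields against the values it configured.
    jq -r '.[0].auth.digest'  <<< "$qpairs"    # expected: sha256
    jq -r '.[0].auth.dhgroup' <<< "$qpairs"    # expected: ffdhe8192
    jq -r '.[0].auth.state'   <<< "$qpairs"    # expected: completed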
00:16:04.312 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.312 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.312 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:04.312 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:04.570 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:04.570 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.570 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:04.570 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:04.570 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:04.570 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.570 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.570 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.570 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.570 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.570 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.570 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.570 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.135 00:16:05.135 14:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.135 14:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.135 14:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.393 14:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.393 14:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.393 14:42:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.393 14:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.393 14:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.393 14:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.393 { 00:16:05.393 "cntlid": 45, 00:16:05.393 "qid": 0, 00:16:05.393 "state": "enabled", 00:16:05.393 "thread": "nvmf_tgt_poll_group_000", 00:16:05.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:05.393 "listen_address": { 00:16:05.393 "trtype": "TCP", 00:16:05.393 "adrfam": "IPv4", 00:16:05.393 "traddr": "10.0.0.3", 00:16:05.393 "trsvcid": "4420" 00:16:05.393 }, 00:16:05.393 "peer_address": { 00:16:05.393 "trtype": "TCP", 00:16:05.393 "adrfam": "IPv4", 00:16:05.393 "traddr": "10.0.0.1", 00:16:05.393 "trsvcid": "50082" 00:16:05.393 }, 00:16:05.393 "auth": { 00:16:05.393 "state": "completed", 00:16:05.393 "digest": "sha256", 00:16:05.393 "dhgroup": "ffdhe8192" 00:16:05.393 } 00:16:05.393 } 00:16:05.393 ]' 00:16:05.393 14:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.393 14:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:05.393 14:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.393 14:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:05.393 14:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.393 14:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.394 14:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.394 14:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.651 14:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:16:05.651 14:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:16:06.218 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.219 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:06.219 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
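Each combination is also exercised from the kernel initiator: nvme connect is handed the raw DHHC-1 secrets directly (the trace prints them in full; placeholder variables are used below), the controller is disconnected again, and the host entry is removed from the subsystem before the next iteration. In outline:

    # $dhchap_key / $dhchap_ctrl_key are placeholders for the DHHC-1:xx:<base64>: secrets
    # generated earlier in the test; $hostnqn/$hostid are the UUID-based host identity.
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "$dhchap_key" --dhchap-ctrl-secret "$dhchap_ctrl_key"

    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    # Clean up so the next digest/dhgroup/key combination starts from a known state.
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"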
00:16:06.219 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.219 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.219 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.219 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:06.219 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:06.485 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:06.485 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.485 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:06.485 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:06.485 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:06.485 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.485 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key3 00:16:06.485 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.485 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.485 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.485 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:06.485 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:06.485 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:07.052 00:16:07.052 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.052 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.052 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.052 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.052 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.052 
14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.052 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.052 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.052 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.052 { 00:16:07.052 "cntlid": 47, 00:16:07.052 "qid": 0, 00:16:07.052 "state": "enabled", 00:16:07.052 "thread": "nvmf_tgt_poll_group_000", 00:16:07.052 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:07.052 "listen_address": { 00:16:07.052 "trtype": "TCP", 00:16:07.052 "adrfam": "IPv4", 00:16:07.052 "traddr": "10.0.0.3", 00:16:07.052 "trsvcid": "4420" 00:16:07.052 }, 00:16:07.052 "peer_address": { 00:16:07.052 "trtype": "TCP", 00:16:07.052 "adrfam": "IPv4", 00:16:07.052 "traddr": "10.0.0.1", 00:16:07.052 "trsvcid": "50106" 00:16:07.052 }, 00:16:07.052 "auth": { 00:16:07.052 "state": "completed", 00:16:07.052 "digest": "sha256", 00:16:07.052 "dhgroup": "ffdhe8192" 00:16:07.052 } 00:16:07.052 } 00:16:07.052 ]' 00:16:07.052 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.309 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:07.309 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.309 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:07.309 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.309 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.309 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.309 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.567 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:16:07.567 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:16:08.133 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.133 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:08.133 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.133 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
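Around this point the trace switches from the sha256/ffdhe8192 pass to sha384 with the null DH group: target/auth.sh lines 118-123 show the three nested loops that drive this whole section. Reconstructed from those line references (the exact contents of the digests, dhgroups and keys arrays are defined earlier in the script and only partially visible in this excerpt), the driver loop is approximately:

    for digest in "${digests[@]}"; do            # sha256, sha384, ... per the trace
      for dhgroup in "${dhgroups[@]}"; do        # null, ffdhe6144, ffdhe8192, ...
        for keyid in "${!keys[@]}"; do           # 0 1 2 3
          # Re-pin the host to one digest/dhgroup, then run a full authenticate pass.
          hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
      done
    done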
00:16:08.133 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.133 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:08.133 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:08.133 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.133 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:08.133 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:08.133 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:08.133 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.133 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:08.133 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:08.133 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:08.133 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.133 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.133 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.133 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.133 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.391 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.391 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.391 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.391 00:16:08.649 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.649 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.649 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.649 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.649 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.649 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.649 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.649 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.649 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.649 { 00:16:08.649 "cntlid": 49, 00:16:08.649 "qid": 0, 00:16:08.649 "state": "enabled", 00:16:08.649 "thread": "nvmf_tgt_poll_group_000", 00:16:08.649 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:08.649 "listen_address": { 00:16:08.649 "trtype": "TCP", 00:16:08.649 "adrfam": "IPv4", 00:16:08.649 "traddr": "10.0.0.3", 00:16:08.649 "trsvcid": "4420" 00:16:08.649 }, 00:16:08.649 "peer_address": { 00:16:08.649 "trtype": "TCP", 00:16:08.649 "adrfam": "IPv4", 00:16:08.649 "traddr": "10.0.0.1", 00:16:08.649 "trsvcid": "50136" 00:16:08.649 }, 00:16:08.649 "auth": { 00:16:08.649 "state": "completed", 00:16:08.649 "digest": "sha384", 00:16:08.649 "dhgroup": "null" 00:16:08.649 } 00:16:08.649 } 00:16:08.649 ]' 00:16:08.649 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.906 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:08.906 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.906 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:08.906 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.906 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.906 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.906 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.164 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:16:09.164 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:16:09.759 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.759 14:42:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:09.759 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.759 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.759 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.759 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.759 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:09.759 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:10.018 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:10.018 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.018 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:10.018 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:10.018 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:10.018 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.018 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.018 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.018 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.018 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.018 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.018 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.018 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.276 00:16:10.276 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.276 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
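One detail that is easy to miss in the wrapped trace: connect_authenticate only passes --dhchap-ctrlr-key when a controller key exists for the selected index, so the key3 passes above authenticate only the host to the controller, while key0-key2 also require the controller to authenticate back. The ckey expansion at target/auth.sh line 68 relies on bash's ${var:+word} form, shown here with $keyid standing in for the function's third positional argument:

    # Expands to the option/value pair only when ckeys[keyid] is set and non-empty;
    # otherwise ckey is an empty array and the flag is omitted entirely.
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})

    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"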
00:16:10.276 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.534 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.534 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.534 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.534 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.534 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.534 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.534 { 00:16:10.534 "cntlid": 51, 00:16:10.534 "qid": 0, 00:16:10.534 "state": "enabled", 00:16:10.534 "thread": "nvmf_tgt_poll_group_000", 00:16:10.534 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:10.534 "listen_address": { 00:16:10.534 "trtype": "TCP", 00:16:10.534 "adrfam": "IPv4", 00:16:10.534 "traddr": "10.0.0.3", 00:16:10.534 "trsvcid": "4420" 00:16:10.534 }, 00:16:10.534 "peer_address": { 00:16:10.534 "trtype": "TCP", 00:16:10.534 "adrfam": "IPv4", 00:16:10.534 "traddr": "10.0.0.1", 00:16:10.534 "trsvcid": "47220" 00:16:10.534 }, 00:16:10.534 "auth": { 00:16:10.534 "state": "completed", 00:16:10.534 "digest": "sha384", 00:16:10.534 "dhgroup": "null" 00:16:10.534 } 00:16:10.534 } 00:16:10.534 ]' 00:16:10.534 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.534 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:10.534 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.534 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:10.534 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.534 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.534 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.534 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.792 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:16:10.792 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:16:11.358 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.358 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.358 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:11.358 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.358 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.358 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.358 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.358 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:11.358 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:11.615 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:11.615 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.615 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:11.615 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:11.615 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:11.615 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.615 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.615 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.615 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.615 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.615 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.615 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.615 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.874 00:16:11.874 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.874 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.874 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.874 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.874 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.874 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.874 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.131 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.131 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.131 { 00:16:12.131 "cntlid": 53, 00:16:12.131 "qid": 0, 00:16:12.131 "state": "enabled", 00:16:12.131 "thread": "nvmf_tgt_poll_group_000", 00:16:12.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:12.131 "listen_address": { 00:16:12.131 "trtype": "TCP", 00:16:12.131 "adrfam": "IPv4", 00:16:12.131 "traddr": "10.0.0.3", 00:16:12.131 "trsvcid": "4420" 00:16:12.131 }, 00:16:12.131 "peer_address": { 00:16:12.131 "trtype": "TCP", 00:16:12.131 "adrfam": "IPv4", 00:16:12.131 "traddr": "10.0.0.1", 00:16:12.131 "trsvcid": "47232" 00:16:12.131 }, 00:16:12.131 "auth": { 00:16:12.131 "state": "completed", 00:16:12.131 "digest": "sha384", 00:16:12.131 "dhgroup": "null" 00:16:12.131 } 00:16:12.131 } 00:16:12.131 ]' 00:16:12.131 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.131 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:12.131 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.131 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:12.132 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.132 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.132 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.132 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.389 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:16:12.389 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:16:12.955 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.955 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:12.955 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.955 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.955 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.955 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.955 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:12.955 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:13.213 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:13.213 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.213 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:13.213 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:13.213 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:13.213 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.213 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key3 00:16:13.213 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.213 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.213 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.213 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:13.213 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.213 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.479 00:16:13.479 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.479 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.479 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.738 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.738 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.738 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.738 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.738 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.738 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.738 { 00:16:13.738 "cntlid": 55, 00:16:13.738 "qid": 0, 00:16:13.738 "state": "enabled", 00:16:13.738 "thread": "nvmf_tgt_poll_group_000", 00:16:13.738 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:13.738 "listen_address": { 00:16:13.738 "trtype": "TCP", 00:16:13.738 "adrfam": "IPv4", 00:16:13.738 "traddr": "10.0.0.3", 00:16:13.738 "trsvcid": "4420" 00:16:13.738 }, 00:16:13.738 "peer_address": { 00:16:13.738 "trtype": "TCP", 00:16:13.738 "adrfam": "IPv4", 00:16:13.738 "traddr": "10.0.0.1", 00:16:13.738 "trsvcid": "47262" 00:16:13.738 }, 00:16:13.738 "auth": { 00:16:13.738 "state": "completed", 00:16:13.738 "digest": "sha384", 00:16:13.738 "dhgroup": "null" 00:16:13.738 } 00:16:13.738 } 00:16:13.738 ]' 00:16:13.738 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.738 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:13.738 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.738 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:13.738 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.738 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.738 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.738 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.996 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:16:13.996 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:16:14.930 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:16:14.930 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:14.930 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.930 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.930 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.930 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:14.930 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.930 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:14.930 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:14.930 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:14.930 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.930 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:14.930 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:14.930 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:14.930 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.930 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.930 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.930 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.930 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.930 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.930 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.930 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.188 00:16:15.188 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
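For reference, each of these connect_authenticate rounds in the log above boils down to the same host/target RPC sequence, only varying the digest, dhgroup and key index. The sketch below is not part of the test output; it is a minimal reconstruction of one round, assuming the named DH-HMAC-CHAP keys (key0/ckey0) were registered earlier in target/auth.sh, that target-side RPCs go to the default SPDK socket while host-side RPCs use /var/tmp/host.sock, and reusing the addresses and NQNs seen in this run.

# minimal sketch of one sha384/ffdhe2048 connect_authenticate round (assumptions noted above)
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
hostsock="/var/tmp/host.sock"
subnqn="nqn.2024-03.io.spdk:cnode0"
hostnqn="nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa"

# 1. Limit the host-side initiator to one digest/dhgroup combination.
"$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

# 2. Allow the host on the subsystem with a DH-HMAC-CHAP key (controller key is optional).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. Attach a controller from the host side; authentication happens during this connect.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 4. Verify what was negotiated by inspecting the target-side qpair, as the log does with jq.
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | \
    jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

# 5. Tear down so the next digest/dhgroup/key combination starts from a clean state.
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The separate nvme connect / nvme disconnect invocations in the log exercise the same key material through the kernel initiator (nvme-cli with --dhchap-secret/--dhchap-ctrl-secret) rather than through the SPDK bdev_nvme host path.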
00:16:15.188 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.188 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.446 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.446 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.446 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.446 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.446 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.446 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.446 { 00:16:15.446 "cntlid": 57, 00:16:15.446 "qid": 0, 00:16:15.446 "state": "enabled", 00:16:15.446 "thread": "nvmf_tgt_poll_group_000", 00:16:15.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:15.446 "listen_address": { 00:16:15.446 "trtype": "TCP", 00:16:15.446 "adrfam": "IPv4", 00:16:15.446 "traddr": "10.0.0.3", 00:16:15.446 "trsvcid": "4420" 00:16:15.446 }, 00:16:15.446 "peer_address": { 00:16:15.446 "trtype": "TCP", 00:16:15.446 "adrfam": "IPv4", 00:16:15.446 "traddr": "10.0.0.1", 00:16:15.446 "trsvcid": "47288" 00:16:15.446 }, 00:16:15.446 "auth": { 00:16:15.446 "state": "completed", 00:16:15.446 "digest": "sha384", 00:16:15.446 "dhgroup": "ffdhe2048" 00:16:15.446 } 00:16:15.446 } 00:16:15.446 ]' 00:16:15.446 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.446 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:15.447 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.447 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:15.447 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.447 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.447 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.447 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.704 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:16:15.704 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: 
--dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:16:16.638 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.638 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:16.638 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.638 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.638 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.638 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.638 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:16.638 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:16.638 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:16.638 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.638 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:16.638 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:16.638 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:16.638 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.638 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.638 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.638 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.638 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.639 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.639 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.639 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.898 00:16:16.898 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.898 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.898 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.176 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.176 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.176 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.176 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.176 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.176 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.176 { 00:16:17.176 "cntlid": 59, 00:16:17.176 "qid": 0, 00:16:17.176 "state": "enabled", 00:16:17.176 "thread": "nvmf_tgt_poll_group_000", 00:16:17.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:17.176 "listen_address": { 00:16:17.176 "trtype": "TCP", 00:16:17.176 "adrfam": "IPv4", 00:16:17.176 "traddr": "10.0.0.3", 00:16:17.176 "trsvcid": "4420" 00:16:17.176 }, 00:16:17.176 "peer_address": { 00:16:17.176 "trtype": "TCP", 00:16:17.176 "adrfam": "IPv4", 00:16:17.176 "traddr": "10.0.0.1", 00:16:17.176 "trsvcid": "47298" 00:16:17.176 }, 00:16:17.176 "auth": { 00:16:17.176 "state": "completed", 00:16:17.176 "digest": "sha384", 00:16:17.176 "dhgroup": "ffdhe2048" 00:16:17.176 } 00:16:17.176 } 00:16:17.176 ]' 00:16:17.176 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.176 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:17.176 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.176 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:17.176 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.176 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.176 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.176 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.433 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:16:17.433 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:16:17.999 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.999 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:17.999 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.999 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.999 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.999 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.999 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:17.999 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:18.264 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:18.264 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.264 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:18.264 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:18.264 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:18.264 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.264 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.264 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.264 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.264 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.264 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.264 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.264 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.522 00:16:18.522 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.522 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.522 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.780 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.780 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.780 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.780 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.780 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.780 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.780 { 00:16:18.780 "cntlid": 61, 00:16:18.780 "qid": 0, 00:16:18.780 "state": "enabled", 00:16:18.780 "thread": "nvmf_tgt_poll_group_000", 00:16:18.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:18.780 "listen_address": { 00:16:18.780 "trtype": "TCP", 00:16:18.780 "adrfam": "IPv4", 00:16:18.780 "traddr": "10.0.0.3", 00:16:18.780 "trsvcid": "4420" 00:16:18.780 }, 00:16:18.780 "peer_address": { 00:16:18.780 "trtype": "TCP", 00:16:18.780 "adrfam": "IPv4", 00:16:18.780 "traddr": "10.0.0.1", 00:16:18.780 "trsvcid": "47312" 00:16:18.780 }, 00:16:18.780 "auth": { 00:16:18.780 "state": "completed", 00:16:18.780 "digest": "sha384", 00:16:18.780 "dhgroup": "ffdhe2048" 00:16:18.780 } 00:16:18.780 } 00:16:18.780 ]' 00:16:18.780 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.780 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.780 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.780 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:18.780 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.780 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.780 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.780 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.038 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:16:19.038 14:42:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:16:19.603 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.603 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:19.603 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.603 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.603 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.603 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.603 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:19.603 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:19.861 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:19.861 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.861 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:19.861 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:19.861 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:19.861 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.861 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key3 00:16:19.861 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.861 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.861 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.861 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:19.861 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:19.861 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:20.117 00:16:20.117 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.117 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.117 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.375 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.375 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.375 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.375 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.375 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.375 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.375 { 00:16:20.375 "cntlid": 63, 00:16:20.375 "qid": 0, 00:16:20.375 "state": "enabled", 00:16:20.375 "thread": "nvmf_tgt_poll_group_000", 00:16:20.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:20.375 "listen_address": { 00:16:20.375 "trtype": "TCP", 00:16:20.375 "adrfam": "IPv4", 00:16:20.375 "traddr": "10.0.0.3", 00:16:20.375 "trsvcid": "4420" 00:16:20.375 }, 00:16:20.375 "peer_address": { 00:16:20.375 "trtype": "TCP", 00:16:20.375 "adrfam": "IPv4", 00:16:20.375 "traddr": "10.0.0.1", 00:16:20.375 "trsvcid": "49628" 00:16:20.375 }, 00:16:20.375 "auth": { 00:16:20.375 "state": "completed", 00:16:20.375 "digest": "sha384", 00:16:20.375 "dhgroup": "ffdhe2048" 00:16:20.375 } 00:16:20.375 } 00:16:20.375 ]' 00:16:20.375 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.375 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:20.375 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.375 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:20.375 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.375 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.375 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.375 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.632 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:16:20.632 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:16:21.199 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.199 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:21.199 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.199 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.199 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.199 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:21.199 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.199 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:21.199 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:21.457 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:21.457 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.457 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:21.457 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:21.457 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:21.457 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.457 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.457 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.457 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.457 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.457 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.457 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:16:21.457 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.715 00:16:21.715 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.715 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.715 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.973 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.973 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.973 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.973 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.973 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.973 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.973 { 00:16:21.973 "cntlid": 65, 00:16:21.973 "qid": 0, 00:16:21.973 "state": "enabled", 00:16:21.973 "thread": "nvmf_tgt_poll_group_000", 00:16:21.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:21.973 "listen_address": { 00:16:21.973 "trtype": "TCP", 00:16:21.973 "adrfam": "IPv4", 00:16:21.973 "traddr": "10.0.0.3", 00:16:21.973 "trsvcid": "4420" 00:16:21.973 }, 00:16:21.973 "peer_address": { 00:16:21.973 "trtype": "TCP", 00:16:21.973 "adrfam": "IPv4", 00:16:21.973 "traddr": "10.0.0.1", 00:16:21.973 "trsvcid": "49664" 00:16:21.973 }, 00:16:21.973 "auth": { 00:16:21.973 "state": "completed", 00:16:21.973 "digest": "sha384", 00:16:21.973 "dhgroup": "ffdhe3072" 00:16:21.973 } 00:16:21.973 } 00:16:21.973 ]' 00:16:21.974 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.974 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:21.974 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.974 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:21.974 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.974 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.974 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.974 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.231 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:16:22.231 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:16:22.796 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.797 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.797 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:22.797 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.797 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.797 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.797 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.797 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:22.797 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:23.054 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:23.054 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.054 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:23.054 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:23.054 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:23.054 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.054 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.054 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.054 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.054 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.054 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.055 14:42:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.055 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.313 00:16:23.313 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.313 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.313 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.313 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.313 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.313 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.313 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.313 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.313 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.313 { 00:16:23.313 "cntlid": 67, 00:16:23.313 "qid": 0, 00:16:23.313 "state": "enabled", 00:16:23.313 "thread": "nvmf_tgt_poll_group_000", 00:16:23.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:23.313 "listen_address": { 00:16:23.313 "trtype": "TCP", 00:16:23.313 "adrfam": "IPv4", 00:16:23.313 "traddr": "10.0.0.3", 00:16:23.313 "trsvcid": "4420" 00:16:23.313 }, 00:16:23.313 "peer_address": { 00:16:23.313 "trtype": "TCP", 00:16:23.313 "adrfam": "IPv4", 00:16:23.313 "traddr": "10.0.0.1", 00:16:23.313 "trsvcid": "49710" 00:16:23.313 }, 00:16:23.313 "auth": { 00:16:23.313 "state": "completed", 00:16:23.313 "digest": "sha384", 00:16:23.313 "dhgroup": "ffdhe3072" 00:16:23.313 } 00:16:23.313 } 00:16:23.313 ]' 00:16:23.571 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.571 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:23.571 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.571 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:23.571 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.571 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.571 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.571 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.829 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:16:23.829 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:16:24.395 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.395 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:24.395 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.395 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.395 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.395 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.395 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:24.395 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:24.654 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:24.654 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.654 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:24.654 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:24.654 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:24.654 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.654 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.654 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.654 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.654 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.654 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.654 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.654 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.913 00:16:24.913 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.913 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.913 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.172 14:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.172 14:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.172 14:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.172 14:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.172 14:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.172 14:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.172 { 00:16:25.172 "cntlid": 69, 00:16:25.172 "qid": 0, 00:16:25.172 "state": "enabled", 00:16:25.172 "thread": "nvmf_tgt_poll_group_000", 00:16:25.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:25.172 "listen_address": { 00:16:25.172 "trtype": "TCP", 00:16:25.172 "adrfam": "IPv4", 00:16:25.172 "traddr": "10.0.0.3", 00:16:25.172 "trsvcid": "4420" 00:16:25.172 }, 00:16:25.172 "peer_address": { 00:16:25.172 "trtype": "TCP", 00:16:25.172 "adrfam": "IPv4", 00:16:25.172 "traddr": "10.0.0.1", 00:16:25.172 "trsvcid": "49730" 00:16:25.172 }, 00:16:25.172 "auth": { 00:16:25.172 "state": "completed", 00:16:25.172 "digest": "sha384", 00:16:25.172 "dhgroup": "ffdhe3072" 00:16:25.172 } 00:16:25.172 } 00:16:25.172 ]' 00:16:25.172 14:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.172 14:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:25.172 14:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.172 14:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:25.172 14:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.172 14:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.172 14:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:25.172 14:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.429 14:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:16:25.429 14:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:16:25.995 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.995 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:25.995 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.995 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.253 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.253 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.253 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:26.253 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:26.253 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:26.253 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.253 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:26.253 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:26.253 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:26.253 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.253 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key3 00:16:26.253 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.253 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.253 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.253 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:26.253 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:26.253 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:26.520 00:16:26.520 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.520 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.520 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.777 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.777 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.777 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.777 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.777 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.777 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.777 { 00:16:26.777 "cntlid": 71, 00:16:26.777 "qid": 0, 00:16:26.777 "state": "enabled", 00:16:26.777 "thread": "nvmf_tgt_poll_group_000", 00:16:26.777 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:26.777 "listen_address": { 00:16:26.777 "trtype": "TCP", 00:16:26.777 "adrfam": "IPv4", 00:16:26.777 "traddr": "10.0.0.3", 00:16:26.777 "trsvcid": "4420" 00:16:26.777 }, 00:16:26.777 "peer_address": { 00:16:26.777 "trtype": "TCP", 00:16:26.777 "adrfam": "IPv4", 00:16:26.777 "traddr": "10.0.0.1", 00:16:26.777 "trsvcid": "49754" 00:16:26.777 }, 00:16:26.777 "auth": { 00:16:26.777 "state": "completed", 00:16:26.777 "digest": "sha384", 00:16:26.777 "dhgroup": "ffdhe3072" 00:16:26.777 } 00:16:26.777 } 00:16:26.777 ]' 00:16:26.777 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.035 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:27.035 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.035 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:27.035 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.035 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.035 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.035 14:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.294 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:16:27.294 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:16:27.860 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.860 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:27.860 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.860 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.860 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.860 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:27.860 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.860 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:27.860 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:28.118 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:28.118 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.118 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:28.118 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:28.118 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:28.118 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.118 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.118 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.118 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.118 14:42:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.118 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.118 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.118 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.376 00:16:28.376 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.376 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.376 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.634 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.634 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.634 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.634 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.634 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.634 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.634 { 00:16:28.634 "cntlid": 73, 00:16:28.634 "qid": 0, 00:16:28.634 "state": "enabled", 00:16:28.634 "thread": "nvmf_tgt_poll_group_000", 00:16:28.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:28.634 "listen_address": { 00:16:28.634 "trtype": "TCP", 00:16:28.634 "adrfam": "IPv4", 00:16:28.634 "traddr": "10.0.0.3", 00:16:28.634 "trsvcid": "4420" 00:16:28.634 }, 00:16:28.634 "peer_address": { 00:16:28.634 "trtype": "TCP", 00:16:28.634 "adrfam": "IPv4", 00:16:28.634 "traddr": "10.0.0.1", 00:16:28.634 "trsvcid": "49772" 00:16:28.634 }, 00:16:28.634 "auth": { 00:16:28.634 "state": "completed", 00:16:28.634 "digest": "sha384", 00:16:28.634 "dhgroup": "ffdhe4096" 00:16:28.634 } 00:16:28.634 } 00:16:28.634 ]' 00:16:28.634 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.634 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:28.634 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.634 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:28.634 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.634 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.634 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.634 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.891 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:16:28.891 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:16:29.456 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.456 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:29.456 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.456 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.456 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.456 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.456 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:29.456 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:29.714 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:29.714 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.714 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:29.714 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:29.714 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:29.714 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.714 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.714 14:42:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.714 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.714 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.714 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.714 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.714 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.989 00:16:29.989 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.989 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.989 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.250 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.250 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.250 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.250 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.250 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.250 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.250 { 00:16:30.250 "cntlid": 75, 00:16:30.250 "qid": 0, 00:16:30.250 "state": "enabled", 00:16:30.250 "thread": "nvmf_tgt_poll_group_000", 00:16:30.250 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:30.250 "listen_address": { 00:16:30.250 "trtype": "TCP", 00:16:30.250 "adrfam": "IPv4", 00:16:30.250 "traddr": "10.0.0.3", 00:16:30.250 "trsvcid": "4420" 00:16:30.250 }, 00:16:30.250 "peer_address": { 00:16:30.250 "trtype": "TCP", 00:16:30.250 "adrfam": "IPv4", 00:16:30.250 "traddr": "10.0.0.1", 00:16:30.250 "trsvcid": "35120" 00:16:30.250 }, 00:16:30.250 "auth": { 00:16:30.250 "state": "completed", 00:16:30.250 "digest": "sha384", 00:16:30.250 "dhgroup": "ffdhe4096" 00:16:30.250 } 00:16:30.250 } 00:16:30.250 ]' 00:16:30.250 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.250 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:30.250 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.250 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:16:30.250 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.250 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.250 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.250 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.515 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:16:30.515 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:16:31.079 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.079 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:31.079 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.079 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.079 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.079 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.079 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:31.079 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:31.336 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:31.336 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.336 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:31.336 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:31.336 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:31.336 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.336 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.336 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.336 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.336 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.336 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.336 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.336 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.594 00:16:31.594 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.594 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.594 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.854 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.854 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.854 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.854 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.854 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.854 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.854 { 00:16:31.854 "cntlid": 77, 00:16:31.854 "qid": 0, 00:16:31.854 "state": "enabled", 00:16:31.854 "thread": "nvmf_tgt_poll_group_000", 00:16:31.854 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:31.854 "listen_address": { 00:16:31.854 "trtype": "TCP", 00:16:31.854 "adrfam": "IPv4", 00:16:31.854 "traddr": "10.0.0.3", 00:16:31.854 "trsvcid": "4420" 00:16:31.854 }, 00:16:31.854 "peer_address": { 00:16:31.854 "trtype": "TCP", 00:16:31.854 "adrfam": "IPv4", 00:16:31.854 "traddr": "10.0.0.1", 00:16:31.854 "trsvcid": "35146" 00:16:31.854 }, 00:16:31.854 "auth": { 00:16:31.854 "state": "completed", 00:16:31.854 "digest": "sha384", 00:16:31.854 "dhgroup": "ffdhe4096" 00:16:31.854 } 00:16:31.854 } 00:16:31.854 ]' 00:16:31.854 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.854 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:31.854 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:16:31.854 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:31.854 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.854 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.854 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.854 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.114 14:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:16:32.114 14:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:16:32.704 14:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.704 14:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:32.704 14:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.704 14:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.704 14:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.704 14:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.704 14:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:32.704 14:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:32.961 14:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:32.961 14:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.961 14:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:32.961 14:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:32.961 14:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:32.961 14:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.961 14:42:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key3 00:16:32.961 14:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.961 14:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.961 14:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.961 14:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:32.961 14:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:32.961 14:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.218 00:16:33.218 14:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.218 14:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.218 14:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.475 14:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.475 14:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.475 14:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.475 14:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.475 14:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.475 14:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.475 { 00:16:33.475 "cntlid": 79, 00:16:33.475 "qid": 0, 00:16:33.475 "state": "enabled", 00:16:33.475 "thread": "nvmf_tgt_poll_group_000", 00:16:33.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:33.475 "listen_address": { 00:16:33.475 "trtype": "TCP", 00:16:33.475 "adrfam": "IPv4", 00:16:33.475 "traddr": "10.0.0.3", 00:16:33.475 "trsvcid": "4420" 00:16:33.475 }, 00:16:33.475 "peer_address": { 00:16:33.475 "trtype": "TCP", 00:16:33.475 "adrfam": "IPv4", 00:16:33.475 "traddr": "10.0.0.1", 00:16:33.475 "trsvcid": "35172" 00:16:33.475 }, 00:16:33.475 "auth": { 00:16:33.475 "state": "completed", 00:16:33.475 "digest": "sha384", 00:16:33.475 "dhgroup": "ffdhe4096" 00:16:33.475 } 00:16:33.475 } 00:16:33.475 ]' 00:16:33.475 14:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.745 14:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:33.745 14:42:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.745 14:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:33.745 14:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.745 14:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.745 14:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.745 14:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.002 14:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:16:34.002 14:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:16:34.567 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.567 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:34.567 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.567 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.567 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.567 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:34.568 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.568 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:34.568 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:34.568 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:34.568 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.568 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:34.568 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:34.568 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:34.568 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.568 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.568 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.568 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.568 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.568 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.568 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.568 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.134 00:16:35.134 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.134 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.134 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.391 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.391 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.391 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.391 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.391 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.391 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.391 { 00:16:35.391 "cntlid": 81, 00:16:35.391 "qid": 0, 00:16:35.391 "state": "enabled", 00:16:35.391 "thread": "nvmf_tgt_poll_group_000", 00:16:35.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:35.391 "listen_address": { 00:16:35.391 "trtype": "TCP", 00:16:35.391 "adrfam": "IPv4", 00:16:35.391 "traddr": "10.0.0.3", 00:16:35.391 "trsvcid": "4420" 00:16:35.391 }, 00:16:35.391 "peer_address": { 00:16:35.391 "trtype": "TCP", 00:16:35.391 "adrfam": "IPv4", 00:16:35.391 "traddr": "10.0.0.1", 00:16:35.391 "trsvcid": "35208" 00:16:35.391 }, 00:16:35.391 "auth": { 00:16:35.391 "state": "completed", 00:16:35.391 "digest": "sha384", 00:16:35.391 "dhgroup": "ffdhe6144" 00:16:35.391 } 00:16:35.391 } 00:16:35.391 ]' 00:16:35.391 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:16:35.391 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:35.391 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.391 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:35.391 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.391 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.391 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.391 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.662 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:16:35.662 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:16:36.228 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.228 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:36.228 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.228 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.228 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.228 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.228 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:36.228 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:36.486 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:36.486 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.486 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:36.486 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:16:36.486 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:36.486 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.486 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.486 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.486 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.486 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.486 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.486 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.486 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.746 00:16:36.746 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.746 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.746 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.004 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.004 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.004 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.004 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.004 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.004 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.004 { 00:16:37.004 "cntlid": 83, 00:16:37.004 "qid": 0, 00:16:37.004 "state": "enabled", 00:16:37.004 "thread": "nvmf_tgt_poll_group_000", 00:16:37.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:37.004 "listen_address": { 00:16:37.004 "trtype": "TCP", 00:16:37.004 "adrfam": "IPv4", 00:16:37.004 "traddr": "10.0.0.3", 00:16:37.004 "trsvcid": "4420" 00:16:37.004 }, 00:16:37.004 "peer_address": { 00:16:37.004 "trtype": "TCP", 00:16:37.004 "adrfam": "IPv4", 00:16:37.004 "traddr": "10.0.0.1", 00:16:37.004 "trsvcid": "35242" 00:16:37.004 }, 00:16:37.004 "auth": { 00:16:37.004 "state": "completed", 00:16:37.004 "digest": "sha384", 
00:16:37.004 "dhgroup": "ffdhe6144" 00:16:37.004 } 00:16:37.004 } 00:16:37.004 ]' 00:16:37.004 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.004 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:37.004 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.004 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:37.004 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.004 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.004 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.004 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.262 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:16:37.262 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:16:37.828 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.828 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:37.828 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.828 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.828 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.828 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.086 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:38.086 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:38.347 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:38.347 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.347 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:16:38.347 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:38.347 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:38.347 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.347 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.347 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.347 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.347 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.347 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.347 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.347 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.616 00:16:38.616 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.616 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.616 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.874 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.874 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.874 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.874 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.874 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.874 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.874 { 00:16:38.874 "cntlid": 85, 00:16:38.874 "qid": 0, 00:16:38.874 "state": "enabled", 00:16:38.874 "thread": "nvmf_tgt_poll_group_000", 00:16:38.874 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:38.874 "listen_address": { 00:16:38.874 "trtype": "TCP", 00:16:38.874 "adrfam": "IPv4", 00:16:38.874 "traddr": "10.0.0.3", 00:16:38.874 "trsvcid": "4420" 00:16:38.874 }, 00:16:38.874 "peer_address": { 00:16:38.874 "trtype": "TCP", 00:16:38.874 "adrfam": "IPv4", 00:16:38.874 "traddr": "10.0.0.1", 00:16:38.874 "trsvcid": "35280" 
00:16:38.874 }, 00:16:38.874 "auth": { 00:16:38.874 "state": "completed", 00:16:38.874 "digest": "sha384", 00:16:38.874 "dhgroup": "ffdhe6144" 00:16:38.874 } 00:16:38.874 } 00:16:38.874 ]' 00:16:38.874 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.874 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:38.874 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.874 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:38.874 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.874 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.874 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.874 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.131 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:16:39.131 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:16:39.698 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.698 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:39.698 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.698 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.698 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.698 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.698 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:39.698 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:39.957 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:39.957 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:16:39.957 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:39.957 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:39.957 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:39.957 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.957 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key3 00:16:39.957 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.957 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.957 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.957 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:39.957 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.957 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.523 00:16:40.523 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.523 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.523 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.523 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.523 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.523 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.523 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.523 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.523 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.523 { 00:16:40.523 "cntlid": 87, 00:16:40.523 "qid": 0, 00:16:40.523 "state": "enabled", 00:16:40.523 "thread": "nvmf_tgt_poll_group_000", 00:16:40.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:40.523 "listen_address": { 00:16:40.523 "trtype": "TCP", 00:16:40.523 "adrfam": "IPv4", 00:16:40.523 "traddr": "10.0.0.3", 00:16:40.523 "trsvcid": "4420" 00:16:40.523 }, 00:16:40.523 "peer_address": { 00:16:40.523 "trtype": "TCP", 00:16:40.523 "adrfam": "IPv4", 00:16:40.523 "traddr": "10.0.0.1", 00:16:40.523 "trsvcid": 
"42150" 00:16:40.523 }, 00:16:40.523 "auth": { 00:16:40.523 "state": "completed", 00:16:40.523 "digest": "sha384", 00:16:40.523 "dhgroup": "ffdhe6144" 00:16:40.523 } 00:16:40.523 } 00:16:40.523 ]' 00:16:40.523 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.812 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:40.812 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.812 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:40.812 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.812 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.812 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.812 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.070 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:16:41.070 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:16:41.636 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.636 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:41.636 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.636 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.636 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.636 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.636 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.636 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:41.636 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:41.922 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:41.922 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:16:41.922 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:41.922 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:41.922 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:41.922 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.922 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.922 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.922 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.922 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.922 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.922 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.922 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.489 00:16:42.489 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.489 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.489 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.489 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.489 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.489 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.489 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.489 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.489 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.489 { 00:16:42.489 "cntlid": 89, 00:16:42.489 "qid": 0, 00:16:42.489 "state": "enabled", 00:16:42.489 "thread": "nvmf_tgt_poll_group_000", 00:16:42.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:42.489 "listen_address": { 00:16:42.489 "trtype": "TCP", 00:16:42.489 "adrfam": "IPv4", 00:16:42.489 "traddr": "10.0.0.3", 00:16:42.489 "trsvcid": "4420" 00:16:42.489 }, 00:16:42.489 "peer_address": { 00:16:42.489 
"trtype": "TCP", 00:16:42.489 "adrfam": "IPv4", 00:16:42.489 "traddr": "10.0.0.1", 00:16:42.489 "trsvcid": "42180" 00:16:42.489 }, 00:16:42.489 "auth": { 00:16:42.489 "state": "completed", 00:16:42.489 "digest": "sha384", 00:16:42.489 "dhgroup": "ffdhe8192" 00:16:42.489 } 00:16:42.489 } 00:16:42.489 ]' 00:16:42.489 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.489 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.489 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.489 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:42.489 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.489 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.489 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.489 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.748 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:16:42.748 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:16:43.315 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.315 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:43.315 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.315 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.315 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.315 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.315 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:43.315 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:43.573 14:42:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:43.573 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.573 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:43.574 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:43.574 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:43.574 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.574 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.574 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.574 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.574 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.574 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.574 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.574 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.139 00:16:44.139 14:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.139 14:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.139 14:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.400 14:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.400 14:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.400 14:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.400 14:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.400 14:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.400 14:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.400 { 00:16:44.400 "cntlid": 91, 00:16:44.400 "qid": 0, 00:16:44.400 "state": "enabled", 00:16:44.400 "thread": "nvmf_tgt_poll_group_000", 00:16:44.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 
00:16:44.400 "listen_address": { 00:16:44.400 "trtype": "TCP", 00:16:44.400 "adrfam": "IPv4", 00:16:44.400 "traddr": "10.0.0.3", 00:16:44.400 "trsvcid": "4420" 00:16:44.400 }, 00:16:44.400 "peer_address": { 00:16:44.400 "trtype": "TCP", 00:16:44.400 "adrfam": "IPv4", 00:16:44.400 "traddr": "10.0.0.1", 00:16:44.400 "trsvcid": "42210" 00:16:44.400 }, 00:16:44.400 "auth": { 00:16:44.400 "state": "completed", 00:16:44.400 "digest": "sha384", 00:16:44.400 "dhgroup": "ffdhe8192" 00:16:44.400 } 00:16:44.400 } 00:16:44.400 ]' 00:16:44.400 14:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.400 14:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:44.400 14:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.400 14:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:44.400 14:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.400 14:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.400 14:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.400 14:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.657 14:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:16:44.658 14:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:16:45.223 14:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.223 14:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:45.223 14:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.223 14:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.223 14:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.223 14:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.223 14:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:45.223 14:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:45.481 14:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:45.481 14:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.481 14:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:45.481 14:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:45.481 14:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:45.481 14:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.481 14:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.481 14:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.481 14:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.481 14:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.481 14:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.481 14:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.481 14:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.048 00:16:46.048 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.048 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.048 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.307 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.307 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.307 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.307 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.307 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.307 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.307 { 00:16:46.307 "cntlid": 93, 00:16:46.307 "qid": 0, 00:16:46.307 "state": "enabled", 00:16:46.307 "thread": 
"nvmf_tgt_poll_group_000", 00:16:46.307 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:46.307 "listen_address": { 00:16:46.307 "trtype": "TCP", 00:16:46.307 "adrfam": "IPv4", 00:16:46.307 "traddr": "10.0.0.3", 00:16:46.307 "trsvcid": "4420" 00:16:46.307 }, 00:16:46.307 "peer_address": { 00:16:46.307 "trtype": "TCP", 00:16:46.307 "adrfam": "IPv4", 00:16:46.307 "traddr": "10.0.0.1", 00:16:46.307 "trsvcid": "42228" 00:16:46.307 }, 00:16:46.307 "auth": { 00:16:46.307 "state": "completed", 00:16:46.307 "digest": "sha384", 00:16:46.307 "dhgroup": "ffdhe8192" 00:16:46.307 } 00:16:46.307 } 00:16:46.307 ]' 00:16:46.307 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.307 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:46.307 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.307 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:46.307 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.307 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.307 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.307 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.565 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:16:46.565 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:16:47.140 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.140 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:47.140 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.140 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.140 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.140 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.140 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:47.140 14:42:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:47.398 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:47.398 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.398 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:47.398 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:47.398 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:47.398 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.398 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key3 00:16:47.398 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.398 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.398 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.398 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:47.398 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:47.398 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:47.966 00:16:47.966 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.966 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.966 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.229 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.229 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.229 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.229 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.229 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.229 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.229 { 00:16:48.229 "cntlid": 95, 00:16:48.229 "qid": 0, 00:16:48.229 "state": "enabled", 00:16:48.229 
"thread": "nvmf_tgt_poll_group_000", 00:16:48.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:48.229 "listen_address": { 00:16:48.229 "trtype": "TCP", 00:16:48.229 "adrfam": "IPv4", 00:16:48.229 "traddr": "10.0.0.3", 00:16:48.229 "trsvcid": "4420" 00:16:48.229 }, 00:16:48.229 "peer_address": { 00:16:48.229 "trtype": "TCP", 00:16:48.229 "adrfam": "IPv4", 00:16:48.229 "traddr": "10.0.0.1", 00:16:48.229 "trsvcid": "42244" 00:16:48.229 }, 00:16:48.229 "auth": { 00:16:48.229 "state": "completed", 00:16:48.229 "digest": "sha384", 00:16:48.229 "dhgroup": "ffdhe8192" 00:16:48.229 } 00:16:48.229 } 00:16:48.229 ]' 00:16:48.229 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.229 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:48.229 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.229 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:48.229 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.229 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.229 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.229 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.487 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:16:48.487 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:16:49.052 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.052 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:49.052 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.052 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.052 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.052 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:49.052 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:49.052 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.052 14:42:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:49.052 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:49.310 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:49.310 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.310 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:49.310 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:49.310 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:49.310 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.310 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.310 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.310 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.310 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.310 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.310 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.311 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.569 00:16:49.569 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.569 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.569 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.827 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.827 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.827 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.827 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.827 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.827 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.827 { 00:16:49.827 "cntlid": 97, 00:16:49.827 "qid": 0, 00:16:49.827 "state": "enabled", 00:16:49.827 "thread": "nvmf_tgt_poll_group_000", 00:16:49.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:49.827 "listen_address": { 00:16:49.827 "trtype": "TCP", 00:16:49.827 "adrfam": "IPv4", 00:16:49.827 "traddr": "10.0.0.3", 00:16:49.827 "trsvcid": "4420" 00:16:49.827 }, 00:16:49.827 "peer_address": { 00:16:49.827 "trtype": "TCP", 00:16:49.827 "adrfam": "IPv4", 00:16:49.827 "traddr": "10.0.0.1", 00:16:49.827 "trsvcid": "54570" 00:16:49.827 }, 00:16:49.827 "auth": { 00:16:49.827 "state": "completed", 00:16:49.827 "digest": "sha512", 00:16:49.827 "dhgroup": "null" 00:16:49.827 } 00:16:49.827 } 00:16:49.827 ]' 00:16:49.827 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.827 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.827 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.827 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:49.827 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.827 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.827 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.827 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.085 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:16:50.085 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:16:50.650 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.650 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:50.650 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.650 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.650 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:50.650 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.650 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:50.650 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:50.907 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:50.907 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.907 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:50.907 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:50.907 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:50.907 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.907 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.907 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.907 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.907 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.907 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.907 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.907 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.164 00:16:51.164 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.164 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.164 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.423 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.423 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.423 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.423 14:43:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.423 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.423 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.423 { 00:16:51.423 "cntlid": 99, 00:16:51.423 "qid": 0, 00:16:51.423 "state": "enabled", 00:16:51.423 "thread": "nvmf_tgt_poll_group_000", 00:16:51.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:51.423 "listen_address": { 00:16:51.423 "trtype": "TCP", 00:16:51.423 "adrfam": "IPv4", 00:16:51.423 "traddr": "10.0.0.3", 00:16:51.423 "trsvcid": "4420" 00:16:51.423 }, 00:16:51.423 "peer_address": { 00:16:51.423 "trtype": "TCP", 00:16:51.423 "adrfam": "IPv4", 00:16:51.423 "traddr": "10.0.0.1", 00:16:51.423 "trsvcid": "54598" 00:16:51.423 }, 00:16:51.423 "auth": { 00:16:51.423 "state": "completed", 00:16:51.423 "digest": "sha512", 00:16:51.423 "dhgroup": "null" 00:16:51.423 } 00:16:51.423 } 00:16:51.423 ]' 00:16:51.423 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.423 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.423 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.423 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:51.423 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.423 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.423 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.423 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.683 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:16:51.683 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:16:52.249 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.249 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:52.249 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.249 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.249 14:43:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.249 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.249 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:52.249 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:52.534 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:16:52.534 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.534 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:52.534 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:52.534 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:52.534 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.534 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.534 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.534 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.534 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.534 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.534 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.534 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.807 00:16:52.807 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.807 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.807 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.065 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.065 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.065 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.065 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.065 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.066 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.066 { 00:16:53.066 "cntlid": 101, 00:16:53.066 "qid": 0, 00:16:53.066 "state": "enabled", 00:16:53.066 "thread": "nvmf_tgt_poll_group_000", 00:16:53.066 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:53.066 "listen_address": { 00:16:53.066 "trtype": "TCP", 00:16:53.066 "adrfam": "IPv4", 00:16:53.066 "traddr": "10.0.0.3", 00:16:53.066 "trsvcid": "4420" 00:16:53.066 }, 00:16:53.066 "peer_address": { 00:16:53.066 "trtype": "TCP", 00:16:53.066 "adrfam": "IPv4", 00:16:53.066 "traddr": "10.0.0.1", 00:16:53.066 "trsvcid": "54608" 00:16:53.066 }, 00:16:53.066 "auth": { 00:16:53.066 "state": "completed", 00:16:53.066 "digest": "sha512", 00:16:53.066 "dhgroup": "null" 00:16:53.066 } 00:16:53.066 } 00:16:53.066 ]' 00:16:53.066 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.066 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.066 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.066 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:53.066 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.066 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.066 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.066 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.323 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:16:53.323 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:16:53.888 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.888 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:53.888 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.888 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:16:53.888 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.888 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.888 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:53.888 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:54.146 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:54.146 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.146 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:54.146 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:54.146 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:54.146 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.146 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key3 00:16:54.146 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.146 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.146 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.146 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:54.146 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.146 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.406 00:16:54.406 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.406 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.406 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.664 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.664 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.664 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:54.664 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.664 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.664 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.664 { 00:16:54.664 "cntlid": 103, 00:16:54.664 "qid": 0, 00:16:54.664 "state": "enabled", 00:16:54.664 "thread": "nvmf_tgt_poll_group_000", 00:16:54.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:54.664 "listen_address": { 00:16:54.664 "trtype": "TCP", 00:16:54.664 "adrfam": "IPv4", 00:16:54.664 "traddr": "10.0.0.3", 00:16:54.664 "trsvcid": "4420" 00:16:54.664 }, 00:16:54.664 "peer_address": { 00:16:54.664 "trtype": "TCP", 00:16:54.664 "adrfam": "IPv4", 00:16:54.664 "traddr": "10.0.0.1", 00:16:54.664 "trsvcid": "54642" 00:16:54.664 }, 00:16:54.664 "auth": { 00:16:54.664 "state": "completed", 00:16:54.664 "digest": "sha512", 00:16:54.664 "dhgroup": "null" 00:16:54.664 } 00:16:54.664 } 00:16:54.664 ]' 00:16:54.664 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.664 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.664 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.664 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:54.664 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.664 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.664 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.664 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.922 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:16:54.922 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:16:55.488 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.488 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:55.488 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.488 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.488 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:16:55.488 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.488 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.488 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:55.488 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:55.746 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:55.746 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.746 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:55.746 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:55.746 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:55.746 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.746 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.746 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.746 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.746 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.746 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.746 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.746 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.004 00:16:56.004 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.004 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.004 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.262 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.262 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.262 
14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.262 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.262 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.262 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.262 { 00:16:56.262 "cntlid": 105, 00:16:56.262 "qid": 0, 00:16:56.262 "state": "enabled", 00:16:56.262 "thread": "nvmf_tgt_poll_group_000", 00:16:56.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:56.262 "listen_address": { 00:16:56.262 "trtype": "TCP", 00:16:56.262 "adrfam": "IPv4", 00:16:56.262 "traddr": "10.0.0.3", 00:16:56.262 "trsvcid": "4420" 00:16:56.262 }, 00:16:56.262 "peer_address": { 00:16:56.262 "trtype": "TCP", 00:16:56.262 "adrfam": "IPv4", 00:16:56.262 "traddr": "10.0.0.1", 00:16:56.262 "trsvcid": "54666" 00:16:56.262 }, 00:16:56.262 "auth": { 00:16:56.262 "state": "completed", 00:16:56.262 "digest": "sha512", 00:16:56.262 "dhgroup": "ffdhe2048" 00:16:56.262 } 00:16:56.262 } 00:16:56.262 ]' 00:16:56.262 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.262 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:56.262 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.262 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:56.262 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.262 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.262 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.262 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.519 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:16:56.519 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:16:57.086 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.086 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:57.086 14:43:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.086 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.086 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.086 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.086 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:57.086 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:57.344 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:57.344 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.344 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:57.344 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:57.344 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:57.344 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.344 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.344 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.344 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.344 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.344 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.344 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.344 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.602 00:16:57.602 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.602 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.602 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.859 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:16:57.860 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.860 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.860 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.860 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.860 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.860 { 00:16:57.860 "cntlid": 107, 00:16:57.860 "qid": 0, 00:16:57.860 "state": "enabled", 00:16:57.860 "thread": "nvmf_tgt_poll_group_000", 00:16:57.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:57.860 "listen_address": { 00:16:57.860 "trtype": "TCP", 00:16:57.860 "adrfam": "IPv4", 00:16:57.860 "traddr": "10.0.0.3", 00:16:57.860 "trsvcid": "4420" 00:16:57.860 }, 00:16:57.860 "peer_address": { 00:16:57.860 "trtype": "TCP", 00:16:57.860 "adrfam": "IPv4", 00:16:57.860 "traddr": "10.0.0.1", 00:16:57.860 "trsvcid": "54690" 00:16:57.860 }, 00:16:57.860 "auth": { 00:16:57.860 "state": "completed", 00:16:57.860 "digest": "sha512", 00:16:57.860 "dhgroup": "ffdhe2048" 00:16:57.860 } 00:16:57.860 } 00:16:57.860 ]' 00:16:57.860 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.860 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.860 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.117 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:58.117 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.117 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.117 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.117 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.117 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:16:58.117 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:16:59.051 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.051 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:16:59.051 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.051 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.051 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.051 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.051 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:59.051 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:59.051 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:59.051 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.051 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:59.051 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:59.051 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:59.051 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.051 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.051 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.051 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.051 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.051 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.051 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.051 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.309 00:16:59.566 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.566 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.566 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:59.566 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.566 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.566 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.566 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.566 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.566 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.566 { 00:16:59.566 "cntlid": 109, 00:16:59.567 "qid": 0, 00:16:59.567 "state": "enabled", 00:16:59.567 "thread": "nvmf_tgt_poll_group_000", 00:16:59.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:16:59.567 "listen_address": { 00:16:59.567 "trtype": "TCP", 00:16:59.567 "adrfam": "IPv4", 00:16:59.567 "traddr": "10.0.0.3", 00:16:59.567 "trsvcid": "4420" 00:16:59.567 }, 00:16:59.567 "peer_address": { 00:16:59.567 "trtype": "TCP", 00:16:59.567 "adrfam": "IPv4", 00:16:59.567 "traddr": "10.0.0.1", 00:16:59.567 "trsvcid": "44566" 00:16:59.567 }, 00:16:59.567 "auth": { 00:16:59.567 "state": "completed", 00:16:59.567 "digest": "sha512", 00:16:59.567 "dhgroup": "ffdhe2048" 00:16:59.567 } 00:16:59.567 } 00:16:59.567 ]' 00:16:59.567 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.567 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.567 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.825 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:59.825 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.825 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.825 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.825 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.083 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:17:00.083 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:17:00.653 14:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.654 14:43:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:00.654 14:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.654 14:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.654 14:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.654 14:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.654 14:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:00.654 14:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:00.915 14:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:00.915 14:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.915 14:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:00.915 14:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:00.915 14:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:00.915 14:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.915 14:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key3 00:17:00.915 14:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.915 14:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.915 14:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.915 14:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:00.915 14:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.915 14:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.172 00:17:01.173 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.173 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.173 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.430 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.430 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.430 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.430 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.430 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.430 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.430 { 00:17:01.430 "cntlid": 111, 00:17:01.430 "qid": 0, 00:17:01.430 "state": "enabled", 00:17:01.430 "thread": "nvmf_tgt_poll_group_000", 00:17:01.430 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:17:01.430 "listen_address": { 00:17:01.430 "trtype": "TCP", 00:17:01.430 "adrfam": "IPv4", 00:17:01.430 "traddr": "10.0.0.3", 00:17:01.430 "trsvcid": "4420" 00:17:01.430 }, 00:17:01.430 "peer_address": { 00:17:01.430 "trtype": "TCP", 00:17:01.430 "adrfam": "IPv4", 00:17:01.430 "traddr": "10.0.0.1", 00:17:01.430 "trsvcid": "44586" 00:17:01.430 }, 00:17:01.430 "auth": { 00:17:01.430 "state": "completed", 00:17:01.430 "digest": "sha512", 00:17:01.430 "dhgroup": "ffdhe2048" 00:17:01.430 } 00:17:01.430 } 00:17:01.430 ]' 00:17:01.430 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.430 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:01.430 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.430 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:01.430 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.430 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.430 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.430 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.687 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:17:01.687 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:17:02.253 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.253 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:02.253 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.253 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.253 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.253 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:02.253 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.253 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:02.253 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:02.510 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:02.510 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.510 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:02.510 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:02.510 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:02.510 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.511 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.511 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.511 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.511 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.511 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.511 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.511 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.768 00:17:02.768 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.768 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.769 14:43:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.026 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.026 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.026 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.026 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.026 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.026 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.026 { 00:17:03.026 "cntlid": 113, 00:17:03.026 "qid": 0, 00:17:03.026 "state": "enabled", 00:17:03.026 "thread": "nvmf_tgt_poll_group_000", 00:17:03.026 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:17:03.026 "listen_address": { 00:17:03.026 "trtype": "TCP", 00:17:03.026 "adrfam": "IPv4", 00:17:03.026 "traddr": "10.0.0.3", 00:17:03.026 "trsvcid": "4420" 00:17:03.026 }, 00:17:03.026 "peer_address": { 00:17:03.026 "trtype": "TCP", 00:17:03.026 "adrfam": "IPv4", 00:17:03.026 "traddr": "10.0.0.1", 00:17:03.026 "trsvcid": "44596" 00:17:03.026 }, 00:17:03.026 "auth": { 00:17:03.026 "state": "completed", 00:17:03.026 "digest": "sha512", 00:17:03.026 "dhgroup": "ffdhe3072" 00:17:03.026 } 00:17:03.026 } 00:17:03.026 ]' 00:17:03.026 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.026 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:03.026 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.026 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:03.026 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.283 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.283 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.283 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.283 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:17:03.283 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 
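For readability, the cycle that this trace repeats for every digest/dhgroup/key combination can be summarised in the following bash sketch. It only recomposes commands already visible in the log above (the rpc.py path, subsystem and host NQNs, host UUID, and the key0..key3 / ckey0..ckey3 key names all come from this run; the DHHC-1 secrets passed to nvme-cli are the same values printed in the trace and are elided below), so it is an illustration of what target/auth.sh is doing here, not a verbatim copy of that script.

  #!/usr/bin/env bash
  # Hedged sketch of one authentication pass, recomposed from the trace above.
  # Target-side RPCs are assumed to go to the target app's default socket; host-side
  # RPCs use the host app's socket at /var/tmp/host.sock, as in the log.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  host_sock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa
  digest=sha512
  dhgroup=ffdhe3072   # the outer loop also runs null, ffdhe2048, ffdhe4096, ...
  keyid=1             # key0..key3 / ckey0..ckey3 were registered earlier in the script (not shown here)

  # 1. Restrict the initiator to a single digest/dhgroup pair.
  "$rpc" -s "$host_sock" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # 2. Allow the host on the subsystem with the matching DH-HMAC-CHAP key
  #    (a controller key is added only for key ids that have one).
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # 3. Attach a controller through the SPDK host app, authenticating with the same key.
  "$rpc" -s "$host_sock" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # 4. Check that the controller came up and that the qpair negotiated the expected parameters.
  [[ $("$rpc" -s "$host_sock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # 5. Tear down, then authenticate once more with the kernel initiator, passing the raw
  #    DHHC-1 secrets on the command line (the actual values appear in the trace above).
  "$rpc" -s "$host_sock" bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 \
      --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
  nvme disconnect -n "$subnqn"
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Each pass changes only the dhgroup or the key index (the digest stays at sha512 throughout this part of the log), which is why the surrounding trace shows the same sequence repeated with only those two values varying.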
00:17:04.241 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.241 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:04.241 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.241 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.241 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.241 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.241 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:04.241 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:04.241 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:04.241 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.241 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:04.241 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:04.241 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:04.241 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.241 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.241 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.241 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.241 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.241 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.241 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.241 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.498 00:17:04.498 14:43:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.498 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.498 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.757 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.757 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.757 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.757 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.757 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.757 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.757 { 00:17:04.757 "cntlid": 115, 00:17:04.757 "qid": 0, 00:17:04.757 "state": "enabled", 00:17:04.757 "thread": "nvmf_tgt_poll_group_000", 00:17:04.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:17:04.757 "listen_address": { 00:17:04.757 "trtype": "TCP", 00:17:04.757 "adrfam": "IPv4", 00:17:04.757 "traddr": "10.0.0.3", 00:17:04.757 "trsvcid": "4420" 00:17:04.757 }, 00:17:04.757 "peer_address": { 00:17:04.757 "trtype": "TCP", 00:17:04.757 "adrfam": "IPv4", 00:17:04.757 "traddr": "10.0.0.1", 00:17:04.757 "trsvcid": "44620" 00:17:04.757 }, 00:17:04.757 "auth": { 00:17:04.757 "state": "completed", 00:17:04.757 "digest": "sha512", 00:17:04.757 "dhgroup": "ffdhe3072" 00:17:04.757 } 00:17:04.757 } 00:17:04.757 ]' 00:17:04.757 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.757 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.757 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.757 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:04.757 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.757 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.757 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.757 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.015 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:17:05.015 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret 
DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:17:05.581 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.581 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:05.581 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.581 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.581 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.581 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.581 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:05.581 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:05.839 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:17:05.839 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.839 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:05.839 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:05.839 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:05.839 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.839 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.839 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.839 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.839 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.839 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.839 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.839 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.097 00:17:06.097 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.097 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.097 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.356 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.356 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.356 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.356 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.356 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.356 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.356 { 00:17:06.356 "cntlid": 117, 00:17:06.356 "qid": 0, 00:17:06.356 "state": "enabled", 00:17:06.356 "thread": "nvmf_tgt_poll_group_000", 00:17:06.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:17:06.356 "listen_address": { 00:17:06.356 "trtype": "TCP", 00:17:06.356 "adrfam": "IPv4", 00:17:06.356 "traddr": "10.0.0.3", 00:17:06.356 "trsvcid": "4420" 00:17:06.356 }, 00:17:06.356 "peer_address": { 00:17:06.356 "trtype": "TCP", 00:17:06.356 "adrfam": "IPv4", 00:17:06.356 "traddr": "10.0.0.1", 00:17:06.356 "trsvcid": "44640" 00:17:06.356 }, 00:17:06.356 "auth": { 00:17:06.356 "state": "completed", 00:17:06.356 "digest": "sha512", 00:17:06.356 "dhgroup": "ffdhe3072" 00:17:06.356 } 00:17:06.356 } 00:17:06.356 ]' 00:17:06.356 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.356 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:06.356 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.356 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:06.356 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.615 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.615 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.615 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.615 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:17:06.615 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:17:07.179 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.438 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:07.438 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.438 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.438 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.438 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.438 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:07.438 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:07.438 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:07.438 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.438 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:07.438 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:07.438 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:07.438 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.438 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key3 00:17:07.438 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.438 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.438 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.438 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:07.438 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:07.438 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:07.697 00:17:07.955 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.955 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.955 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.955 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.955 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.955 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.955 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.955 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.955 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.955 { 00:17:07.955 "cntlid": 119, 00:17:07.955 "qid": 0, 00:17:07.955 "state": "enabled", 00:17:07.955 "thread": "nvmf_tgt_poll_group_000", 00:17:07.955 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:17:07.955 "listen_address": { 00:17:07.955 "trtype": "TCP", 00:17:07.955 "adrfam": "IPv4", 00:17:07.955 "traddr": "10.0.0.3", 00:17:07.955 "trsvcid": "4420" 00:17:07.955 }, 00:17:07.955 "peer_address": { 00:17:07.955 "trtype": "TCP", 00:17:07.955 "adrfam": "IPv4", 00:17:07.955 "traddr": "10.0.0.1", 00:17:07.955 "trsvcid": "44664" 00:17:07.955 }, 00:17:07.955 "auth": { 00:17:07.955 "state": "completed", 00:17:07.955 "digest": "sha512", 00:17:07.955 "dhgroup": "ffdhe3072" 00:17:07.955 } 00:17:07.955 } 00:17:07.955 ]' 00:17:07.955 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.955 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.955 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.212 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:08.212 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.212 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.212 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.212 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.470 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:17:08.470 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:17:09.035 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.035 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:09.035 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.035 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.035 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.035 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.035 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.035 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:09.035 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:09.035 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:09.035 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.035 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:09.035 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:09.035 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:09.035 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.035 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.035 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.035 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.035 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.035 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.035 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.036 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.600 00:17:09.600 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.600 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.600 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.600 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.600 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.600 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.600 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.600 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.600 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.600 { 00:17:09.600 "cntlid": 121, 00:17:09.600 "qid": 0, 00:17:09.600 "state": "enabled", 00:17:09.600 "thread": "nvmf_tgt_poll_group_000", 00:17:09.600 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:17:09.600 "listen_address": { 00:17:09.600 "trtype": "TCP", 00:17:09.600 "adrfam": "IPv4", 00:17:09.600 "traddr": "10.0.0.3", 00:17:09.600 "trsvcid": "4420" 00:17:09.600 }, 00:17:09.600 "peer_address": { 00:17:09.600 "trtype": "TCP", 00:17:09.600 "adrfam": "IPv4", 00:17:09.600 "traddr": "10.0.0.1", 00:17:09.600 "trsvcid": "58012" 00:17:09.600 }, 00:17:09.600 "auth": { 00:17:09.600 "state": "completed", 00:17:09.600 "digest": "sha512", 00:17:09.600 "dhgroup": "ffdhe4096" 00:17:09.600 } 00:17:09.600 } 00:17:09.600 ]' 00:17:09.600 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.876 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:09.876 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.876 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:09.876 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.876 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.876 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.876 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.134 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret 
DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:17:10.134 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:17:10.708 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.708 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:10.708 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.708 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.708 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.708 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.708 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:10.708 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:10.708 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:10.708 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.708 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:10.709 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:10.709 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:10.709 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.709 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.709 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.709 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.709 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.709 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.709 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.709 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.275 00:17:11.275 14:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.275 14:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.275 14:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.275 14:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.275 14:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.275 14:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.275 14:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.275 14:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.275 14:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.275 { 00:17:11.275 "cntlid": 123, 00:17:11.275 "qid": 0, 00:17:11.275 "state": "enabled", 00:17:11.275 "thread": "nvmf_tgt_poll_group_000", 00:17:11.275 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:17:11.275 "listen_address": { 00:17:11.275 "trtype": "TCP", 00:17:11.275 "adrfam": "IPv4", 00:17:11.275 "traddr": "10.0.0.3", 00:17:11.275 "trsvcid": "4420" 00:17:11.275 }, 00:17:11.275 "peer_address": { 00:17:11.275 "trtype": "TCP", 00:17:11.275 "adrfam": "IPv4", 00:17:11.275 "traddr": "10.0.0.1", 00:17:11.275 "trsvcid": "58048" 00:17:11.275 }, 00:17:11.275 "auth": { 00:17:11.275 "state": "completed", 00:17:11.275 "digest": "sha512", 00:17:11.275 "dhgroup": "ffdhe4096" 00:17:11.275 } 00:17:11.275 } 00:17:11.275 ]' 00:17:11.275 14:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.275 14:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:11.275 14:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.275 14:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:11.557 14:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.557 14:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.557 14:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.557 14:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.557 14:43:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:17:11.557 14:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:17:12.121 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.121 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:12.121 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.121 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.121 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.121 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.121 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:12.121 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:12.379 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:12.379 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.379 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:12.379 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:12.379 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:12.379 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.379 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.379 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.379 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.379 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.379 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.379 14:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.379 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.636 00:17:12.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.893 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.893 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.893 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.893 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.893 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.893 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.893 { 00:17:12.893 "cntlid": 125, 00:17:12.893 "qid": 0, 00:17:12.893 "state": "enabled", 00:17:12.893 "thread": "nvmf_tgt_poll_group_000", 00:17:12.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:17:12.893 "listen_address": { 00:17:12.893 "trtype": "TCP", 00:17:12.893 "adrfam": "IPv4", 00:17:12.893 "traddr": "10.0.0.3", 00:17:12.893 "trsvcid": "4420" 00:17:12.893 }, 00:17:12.893 "peer_address": { 00:17:12.893 "trtype": "TCP", 00:17:12.893 "adrfam": "IPv4", 00:17:12.893 "traddr": "10.0.0.1", 00:17:12.893 "trsvcid": "58076" 00:17:12.893 }, 00:17:12.893 "auth": { 00:17:12.893 "state": "completed", 00:17:12.893 "digest": "sha512", 00:17:12.893 "dhgroup": "ffdhe4096" 00:17:12.893 } 00:17:12.893 } 00:17:12.893 ]' 00:17:12.893 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.893 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:12.893 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.150 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:13.150 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.150 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.150 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.150 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.151 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:17:13.151 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:17:14.084 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.084 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:14.084 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.084 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.084 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.084 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.084 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:14.084 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:14.084 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:14.084 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.084 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:14.084 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:14.084 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:14.084 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.084 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key3 00:17:14.084 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.084 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.084 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.084 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:17:14.084 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.084 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.341 00:17:14.341 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.341 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.341 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.599 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.599 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.599 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.599 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.599 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.599 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.599 { 00:17:14.599 "cntlid": 127, 00:17:14.599 "qid": 0, 00:17:14.599 "state": "enabled", 00:17:14.599 "thread": "nvmf_tgt_poll_group_000", 00:17:14.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:17:14.599 "listen_address": { 00:17:14.599 "trtype": "TCP", 00:17:14.599 "adrfam": "IPv4", 00:17:14.599 "traddr": "10.0.0.3", 00:17:14.599 "trsvcid": "4420" 00:17:14.599 }, 00:17:14.599 "peer_address": { 00:17:14.599 "trtype": "TCP", 00:17:14.599 "adrfam": "IPv4", 00:17:14.599 "traddr": "10.0.0.1", 00:17:14.599 "trsvcid": "58092" 00:17:14.599 }, 00:17:14.599 "auth": { 00:17:14.599 "state": "completed", 00:17:14.599 "digest": "sha512", 00:17:14.599 "dhgroup": "ffdhe4096" 00:17:14.599 } 00:17:14.599 } 00:17:14.599 ]' 00:17:14.599 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.599 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:14.599 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.599 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:14.599 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.599 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.599 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.599 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.856 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:17:14.856 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:17:15.421 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.421 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:15.421 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.421 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.421 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.421 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.421 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.421 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:15.421 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:15.678 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:15.678 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.678 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:15.678 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:15.678 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:15.678 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.678 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.678 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.678 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.678 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.678 14:43:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.678 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.678 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.242 00:17:16.242 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.242 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.242 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.242 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.242 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.242 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.242 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.242 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.242 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.242 { 00:17:16.242 "cntlid": 129, 00:17:16.242 "qid": 0, 00:17:16.242 "state": "enabled", 00:17:16.242 "thread": "nvmf_tgt_poll_group_000", 00:17:16.242 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:17:16.242 "listen_address": { 00:17:16.242 "trtype": "TCP", 00:17:16.242 "adrfam": "IPv4", 00:17:16.242 "traddr": "10.0.0.3", 00:17:16.242 "trsvcid": "4420" 00:17:16.242 }, 00:17:16.242 "peer_address": { 00:17:16.242 "trtype": "TCP", 00:17:16.242 "adrfam": "IPv4", 00:17:16.242 "traddr": "10.0.0.1", 00:17:16.242 "trsvcid": "58110" 00:17:16.242 }, 00:17:16.242 "auth": { 00:17:16.242 "state": "completed", 00:17:16.242 "digest": "sha512", 00:17:16.242 "dhgroup": "ffdhe6144" 00:17:16.242 } 00:17:16.242 } 00:17:16.242 ]' 00:17:16.242 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.242 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.242 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.500 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:16.500 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.500 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.500 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.500 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.500 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:17:16.500 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:17:17.432 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.432 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:17.432 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.432 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.432 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.432 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.432 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:17.432 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:17.432 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:17.432 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.432 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:17.432 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:17.432 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:17.432 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.432 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.432 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.432 14:43:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.432 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.432 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.432 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.432 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.689 00:17:17.689 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.689 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.689 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.947 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.947 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.947 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.947 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.947 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.947 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.947 { 00:17:17.947 "cntlid": 131, 00:17:17.947 "qid": 0, 00:17:17.947 "state": "enabled", 00:17:17.947 "thread": "nvmf_tgt_poll_group_000", 00:17:17.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:17:17.947 "listen_address": { 00:17:17.947 "trtype": "TCP", 00:17:17.947 "adrfam": "IPv4", 00:17:17.947 "traddr": "10.0.0.3", 00:17:17.947 "trsvcid": "4420" 00:17:17.947 }, 00:17:17.947 "peer_address": { 00:17:17.947 "trtype": "TCP", 00:17:17.947 "adrfam": "IPv4", 00:17:17.947 "traddr": "10.0.0.1", 00:17:17.947 "trsvcid": "58142" 00:17:17.947 }, 00:17:17.947 "auth": { 00:17:17.947 "state": "completed", 00:17:17.947 "digest": "sha512", 00:17:17.947 "dhgroup": "ffdhe6144" 00:17:17.947 } 00:17:17.947 } 00:17:17.947 ]' 00:17:17.947 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.204 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.204 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.204 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:18.204 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:17:18.204 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.204 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.204 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.461 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:17:18.461 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:17:19.026 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.026 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:19.026 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.026 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.026 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.026 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.026 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:19.026 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:19.284 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:19.284 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.284 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:19.284 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:19.284 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:19.284 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.284 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.284 14:43:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.284 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.284 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.284 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.284 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.284 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.541 00:17:19.541 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.541 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.541 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.798 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.798 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.798 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.798 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.798 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.798 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.798 { 00:17:19.799 "cntlid": 133, 00:17:19.799 "qid": 0, 00:17:19.799 "state": "enabled", 00:17:19.799 "thread": "nvmf_tgt_poll_group_000", 00:17:19.799 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:17:19.799 "listen_address": { 00:17:19.799 "trtype": "TCP", 00:17:19.799 "adrfam": "IPv4", 00:17:19.799 "traddr": "10.0.0.3", 00:17:19.799 "trsvcid": "4420" 00:17:19.799 }, 00:17:19.799 "peer_address": { 00:17:19.799 "trtype": "TCP", 00:17:19.799 "adrfam": "IPv4", 00:17:19.799 "traddr": "10.0.0.1", 00:17:19.799 "trsvcid": "50498" 00:17:19.799 }, 00:17:19.799 "auth": { 00:17:19.799 "state": "completed", 00:17:19.799 "digest": "sha512", 00:17:19.799 "dhgroup": "ffdhe6144" 00:17:19.799 } 00:17:19.799 } 00:17:19.799 ]' 00:17:19.799 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.799 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:19.799 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.799 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:17:19.799 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.799 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.799 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.799 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.056 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:17:20.056 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:17:20.622 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.622 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:20.622 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.622 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.622 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.622 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.622 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:20.622 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:20.880 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:20.880 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.880 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:20.880 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:20.880 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:20.880 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.880 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key3 00:17:20.880 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.880 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.880 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.880 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:20.880 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.880 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:21.445 00:17:21.445 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.445 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.445 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.445 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.445 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.445 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.445 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.445 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.445 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.445 { 00:17:21.445 "cntlid": 135, 00:17:21.445 "qid": 0, 00:17:21.445 "state": "enabled", 00:17:21.445 "thread": "nvmf_tgt_poll_group_000", 00:17:21.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:17:21.445 "listen_address": { 00:17:21.445 "trtype": "TCP", 00:17:21.445 "adrfam": "IPv4", 00:17:21.445 "traddr": "10.0.0.3", 00:17:21.445 "trsvcid": "4420" 00:17:21.445 }, 00:17:21.445 "peer_address": { 00:17:21.445 "trtype": "TCP", 00:17:21.445 "adrfam": "IPv4", 00:17:21.445 "traddr": "10.0.0.1", 00:17:21.445 "trsvcid": "50526" 00:17:21.445 }, 00:17:21.445 "auth": { 00:17:21.445 "state": "completed", 00:17:21.445 "digest": "sha512", 00:17:21.445 "dhgroup": "ffdhe6144" 00:17:21.445 } 00:17:21.445 } 00:17:21.445 ]' 00:17:21.445 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.445 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:21.445 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.445 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:21.445 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.445 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.445 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.445 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.704 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:17:21.704 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:17:22.268 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.268 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:22.268 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.268 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.268 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.268 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:22.268 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.268 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:22.268 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:22.534 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:22.534 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.534 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:22.534 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:22.534 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:22.534 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.534 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.534 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.534 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.534 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.534 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.534 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.534 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.141 00:17:23.141 14:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.141 14:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.141 14:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.141 14:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.141 14:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.141 14:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.141 14:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.141 14:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.141 14:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.141 { 00:17:23.141 "cntlid": 137, 00:17:23.141 "qid": 0, 00:17:23.141 "state": "enabled", 00:17:23.141 "thread": "nvmf_tgt_poll_group_000", 00:17:23.141 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:17:23.141 "listen_address": { 00:17:23.141 "trtype": "TCP", 00:17:23.141 "adrfam": "IPv4", 00:17:23.141 "traddr": "10.0.0.3", 00:17:23.141 "trsvcid": "4420" 00:17:23.141 }, 00:17:23.141 "peer_address": { 00:17:23.141 "trtype": "TCP", 00:17:23.141 "adrfam": "IPv4", 00:17:23.141 "traddr": "10.0.0.1", 00:17:23.141 "trsvcid": "50554" 00:17:23.142 }, 00:17:23.142 "auth": { 00:17:23.142 "state": "completed", 00:17:23.142 "digest": "sha512", 00:17:23.142 "dhgroup": "ffdhe8192" 00:17:23.142 } 00:17:23.142 } 00:17:23.142 ]' 00:17:23.142 14:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.142 14:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.142 14:43:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.400 14:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:23.400 14:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.400 14:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.400 14:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.400 14:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.400 14:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:17:23.400 14:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:17:23.967 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.967 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:23.967 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.967 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.236 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.236 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.236 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:24.236 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:24.236 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:24.236 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.236 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:24.236 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:24.236 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:24.236 14:43:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.236 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.236 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.236 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.236 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.236 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.236 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.236 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.803 00:17:24.803 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.803 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.803 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.063 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.063 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.063 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.063 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.063 14:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.063 14:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.063 { 00:17:25.063 "cntlid": 139, 00:17:25.063 "qid": 0, 00:17:25.063 "state": "enabled", 00:17:25.063 "thread": "nvmf_tgt_poll_group_000", 00:17:25.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:17:25.063 "listen_address": { 00:17:25.063 "trtype": "TCP", 00:17:25.063 "adrfam": "IPv4", 00:17:25.063 "traddr": "10.0.0.3", 00:17:25.063 "trsvcid": "4420" 00:17:25.063 }, 00:17:25.063 "peer_address": { 00:17:25.063 "trtype": "TCP", 00:17:25.063 "adrfam": "IPv4", 00:17:25.063 "traddr": "10.0.0.1", 00:17:25.063 "trsvcid": "50576" 00:17:25.063 }, 00:17:25.063 "auth": { 00:17:25.063 "state": "completed", 00:17:25.063 "digest": "sha512", 00:17:25.063 "dhgroup": "ffdhe8192" 00:17:25.063 } 00:17:25.063 } 00:17:25.063 ]' 00:17:25.063 14:43:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.063 14:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.063 14:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.063 14:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:25.063 14:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.063 14:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.063 14:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.063 14:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.349 14:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:17:25.349 14:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: --dhchap-ctrl-secret DHHC-1:02:MmJjZDUwYmZjODg0ZjQxYjY0NmIzOWVlNTVkNGYyOTRhYWJiNTNiZWFkNTkwYThhZwc2QA==: 00:17:25.913 14:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.913 14:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:25.914 14:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.914 14:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.914 14:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.914 14:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.914 14:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:25.914 14:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:25.914 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:25.914 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.914 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:25.914 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:17:25.914 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:25.914 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.914 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.914 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.914 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.171 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.171 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.171 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.171 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.736 00:17:26.736 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.736 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.736 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.736 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.736 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.736 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.736 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.736 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.736 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.736 { 00:17:26.736 "cntlid": 141, 00:17:26.736 "qid": 0, 00:17:26.736 "state": "enabled", 00:17:26.736 "thread": "nvmf_tgt_poll_group_000", 00:17:26.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:17:26.736 "listen_address": { 00:17:26.736 "trtype": "TCP", 00:17:26.736 "adrfam": "IPv4", 00:17:26.736 "traddr": "10.0.0.3", 00:17:26.736 "trsvcid": "4420" 00:17:26.736 }, 00:17:26.736 "peer_address": { 00:17:26.736 "trtype": "TCP", 00:17:26.736 "adrfam": "IPv4", 00:17:26.736 "traddr": "10.0.0.1", 00:17:26.736 "trsvcid": "50596" 00:17:26.736 }, 00:17:26.736 "auth": { 00:17:26.736 "state": "completed", 00:17:26.736 "digest": 
"sha512", 00:17:26.736 "dhgroup": "ffdhe8192" 00:17:26.736 } 00:17:26.736 } 00:17:26.736 ]' 00:17:26.736 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.736 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:26.994 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.994 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:26.994 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.994 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.994 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.994 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.994 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:17:26.994 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:01:M2NkZmI4YjJkOTRhOWJlZTJmYjlmZjUyMWMyNGIwOWR8Quyv: 00:17:27.560 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.819 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:27.819 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.819 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.819 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.819 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.819 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:27.819 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:27.819 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:27.819 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.819 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:17:27.819 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:27.819 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:27.819 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.819 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key3 00:17:27.819 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.819 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.819 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.819 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:27.819 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:27.820 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.386 00:17:28.386 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.386 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.386 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.645 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.645 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.645 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.645 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.645 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.645 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.645 { 00:17:28.645 "cntlid": 143, 00:17:28.645 "qid": 0, 00:17:28.645 "state": "enabled", 00:17:28.645 "thread": "nvmf_tgt_poll_group_000", 00:17:28.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:17:28.645 "listen_address": { 00:17:28.645 "trtype": "TCP", 00:17:28.645 "adrfam": "IPv4", 00:17:28.645 "traddr": "10.0.0.3", 00:17:28.645 "trsvcid": "4420" 00:17:28.645 }, 00:17:28.645 "peer_address": { 00:17:28.645 "trtype": "TCP", 00:17:28.645 "adrfam": "IPv4", 00:17:28.645 "traddr": "10.0.0.1", 00:17:28.645 "trsvcid": "50624" 00:17:28.645 }, 00:17:28.645 "auth": { 00:17:28.645 "state": "completed", 00:17:28.645 
"digest": "sha512", 00:17:28.645 "dhgroup": "ffdhe8192" 00:17:28.645 } 00:17:28.645 } 00:17:28.645 ]' 00:17:28.645 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.645 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.645 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.909 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:28.909 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.909 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.909 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.909 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.909 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:17:28.909 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:17:29.843 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.843 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:29.843 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.843 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.843 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.843 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:29.843 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:29.843 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:29.843 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:29.843 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:29.843 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:29.843 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:29.843 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.843 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:29.843 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:29.843 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:29.843 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.843 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.843 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.843 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.843 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.843 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.843 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.843 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.409 00:17:30.409 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.409 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.410 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.667 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.667 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.667 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.667 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.667 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.667 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.667 { 00:17:30.667 "cntlid": 145, 00:17:30.667 "qid": 0, 00:17:30.667 "state": "enabled", 00:17:30.667 "thread": "nvmf_tgt_poll_group_000", 00:17:30.667 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:17:30.667 "listen_address": { 00:17:30.667 "trtype": "TCP", 00:17:30.667 "adrfam": "IPv4", 00:17:30.667 "traddr": "10.0.0.3", 00:17:30.667 "trsvcid": "4420" 00:17:30.667 }, 00:17:30.667 "peer_address": { 00:17:30.667 "trtype": "TCP", 00:17:30.667 "adrfam": "IPv4", 00:17:30.667 "traddr": "10.0.0.1", 00:17:30.667 "trsvcid": "51706" 00:17:30.667 }, 00:17:30.667 "auth": { 00:17:30.667 "state": "completed", 00:17:30.667 "digest": "sha512", 00:17:30.667 "dhgroup": "ffdhe8192" 00:17:30.667 } 00:17:30.667 } 00:17:30.667 ]' 00:17:30.667 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.667 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:30.667 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.667 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:30.667 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.667 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.667 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.667 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.925 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:17:30.925 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:00:ZThlNTcyMzBiNTQ2MzczMGQzZGJjNzE5OTRhZTBiMDU0NGU0ZDlhNWVmMWYxZTE363vqEw==: --dhchap-ctrl-secret DHHC-1:03:OTRlMDZkN2U4YWMxNGNiNTFkNzk3N2EzNDc3ODJhYWQ4MGYyYjFjZTI1NmViOWIwYjk0NTUzNmRhMzI2NjVkNWgHSPc=: 00:17:31.490 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.490 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:31.490 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.490 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.490 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.490 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key1 00:17:31.490 14:43:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.490 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.490 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.490 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:31.490 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:31.490 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:31.490 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:31.490 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:31.490 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:31.490 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:31.490 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:31.490 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:31.490 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:32.067 request: 00:17:32.067 { 00:17:32.067 "name": "nvme0", 00:17:32.067 "trtype": "tcp", 00:17:32.067 "traddr": "10.0.0.3", 00:17:32.067 "adrfam": "ipv4", 00:17:32.067 "trsvcid": "4420", 00:17:32.067 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:32.067 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:17:32.067 "prchk_reftag": false, 00:17:32.067 "prchk_guard": false, 00:17:32.067 "hdgst": false, 00:17:32.067 "ddgst": false, 00:17:32.067 "dhchap_key": "key2", 00:17:32.067 "allow_unrecognized_csi": false, 00:17:32.067 "method": "bdev_nvme_attach_controller", 00:17:32.067 "req_id": 1 00:17:32.067 } 00:17:32.067 Got JSON-RPC error response 00:17:32.067 response: 00:17:32.067 { 00:17:32.067 "code": -5, 00:17:32.067 "message": "Input/output error" 00:17:32.067 } 00:17:32.067 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:32.067 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:32.067 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:32.067 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:32.067 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:32.067 
14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.067 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.067 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.067 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.067 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.067 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.067 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.067 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:32.067 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:32.067 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:32.067 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:32.067 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:32.067 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:32.067 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:32.067 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:32.067 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:32.067 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:32.633 request: 00:17:32.633 { 00:17:32.633 "name": "nvme0", 00:17:32.633 "trtype": "tcp", 00:17:32.633 "traddr": "10.0.0.3", 00:17:32.633 "adrfam": "ipv4", 00:17:32.633 "trsvcid": "4420", 00:17:32.633 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:32.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:17:32.633 "prchk_reftag": false, 00:17:32.633 "prchk_guard": false, 00:17:32.633 "hdgst": false, 00:17:32.633 "ddgst": false, 00:17:32.633 "dhchap_key": "key1", 00:17:32.633 "dhchap_ctrlr_key": "ckey2", 00:17:32.633 "allow_unrecognized_csi": false, 00:17:32.633 "method": "bdev_nvme_attach_controller", 00:17:32.633 "req_id": 1 00:17:32.633 } 00:17:32.633 Got JSON-RPC error response 00:17:32.633 response: 00:17:32.633 { 
00:17:32.633 "code": -5, 00:17:32.633 "message": "Input/output error" 00:17:32.633 } 00:17:32.633 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:32.633 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:32.633 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:32.633 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:32.633 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:32.633 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.633 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.633 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.633 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key1 00:17:32.633 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.633 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.633 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.633 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.633 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:32.633 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.633 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:32.633 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:32.633 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:32.633 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:32.633 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.633 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.633 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.199 
request: 00:17:33.199 { 00:17:33.199 "name": "nvme0", 00:17:33.199 "trtype": "tcp", 00:17:33.199 "traddr": "10.0.0.3", 00:17:33.199 "adrfam": "ipv4", 00:17:33.199 "trsvcid": "4420", 00:17:33.199 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:33.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:17:33.199 "prchk_reftag": false, 00:17:33.199 "prchk_guard": false, 00:17:33.199 "hdgst": false, 00:17:33.199 "ddgst": false, 00:17:33.199 "dhchap_key": "key1", 00:17:33.199 "dhchap_ctrlr_key": "ckey1", 00:17:33.199 "allow_unrecognized_csi": false, 00:17:33.199 "method": "bdev_nvme_attach_controller", 00:17:33.199 "req_id": 1 00:17:33.199 } 00:17:33.199 Got JSON-RPC error response 00:17:33.199 response: 00:17:33.199 { 00:17:33.199 "code": -5, 00:17:33.199 "message": "Input/output error" 00:17:33.199 } 00:17:33.199 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:33.199 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:33.199 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:33.199 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:33.199 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:33.199 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.199 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.199 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.199 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 66206 00:17:33.199 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 66206 ']' 00:17:33.199 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 66206 00:17:33.199 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:17:33.199 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:33.199 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66206 00:17:33.199 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:33.199 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:33.199 killing process with pid 66206 00:17:33.199 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66206' 00:17:33.199 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 66206 00:17:33.199 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 66206 00:17:33.199 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:33.199 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:33.199 14:43:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:33.199 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.199 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=68973 00:17:33.199 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 68973 00:17:33.199 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 68973 ']' 00:17:33.199 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:33.199 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.199 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:33.199 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.199 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:33.200 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.132 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:34.132 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:17:34.132 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:34.132 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:34.132 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.132 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.132 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:34.132 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 68973 00:17:34.132 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 68973 ']' 00:17:34.132 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.132 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:34.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.132 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
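For readability, the target restart traced above reduces to the short sequence below; the netns name, binary path, flags and RPC socket are copied from the log, while capturing the pid via $! is an assumption (a minimal sketch, not the test script itself):

    # Sketch of target/auth.sh@160: restart nvmf_tgt with auth debug logging inside the test netns.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!                 # the trace records pid 68973; using $! here is an assumption
    waitforlisten "$nvmfpid"   # blocks until the app accepts RPCs on /var/tmp/spdk.sock

Running with --wait-for-rpc defers full startup so that configuration RPCs (such as the keyring_file_add_key calls that follow) can be issued first, and -L nvmf_auth turns on the authentication debug log used for the remaining passes.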
00:17:34.132 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:34.132 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.390 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:34.390 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:17:34.390 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:34.390 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.390 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.390 null0 00:17:34.390 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.390 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:34.390 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.En4 00:17:34.390 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.390 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.fDr ]] 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fDr 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Voq 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.eU2 ]] 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eU2 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:34.649 14:43:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.I9Q 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.EiI ]] 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.EiI 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.rRN 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key3 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
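The key-file registration and the key3 pass above follow the same two-socket RPC pattern used throughout this section; condensed below as a sketch (file names, NQNs, addresses and flags are taken verbatim from the trace, nothing else is implied):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Target side (default /var/tmp/spdk.sock): register the key file and authorize the host with it.
    "$rpc" keyring_file_add_key key3 /tmp/spdk.key-sha512.rRN
    "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key3

    # Host side: the host app listens on /var/tmp/host.sock; attach a controller with the matching key.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3

A successful attach is then checked through bdev_nvme_get_controllers and nvmf_subsystem_get_qpairs, exactly as in the earlier ffdhe8192 passes.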
00:17:34.649 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.581 nvme0n1 00:17:35.582 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.582 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.582 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.582 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.582 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.582 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.582 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.582 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.582 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.582 { 00:17:35.582 "cntlid": 1, 00:17:35.582 "qid": 0, 00:17:35.582 "state": "enabled", 00:17:35.582 "thread": "nvmf_tgt_poll_group_000", 00:17:35.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:17:35.582 "listen_address": { 00:17:35.582 "trtype": "TCP", 00:17:35.582 "adrfam": "IPv4", 00:17:35.582 "traddr": "10.0.0.3", 00:17:35.582 "trsvcid": "4420" 00:17:35.582 }, 00:17:35.582 "peer_address": { 00:17:35.582 "trtype": "TCP", 00:17:35.582 "adrfam": "IPv4", 00:17:35.582 "traddr": "10.0.0.1", 00:17:35.582 "trsvcid": "51742" 00:17:35.582 }, 00:17:35.582 "auth": { 00:17:35.582 "state": "completed", 00:17:35.582 "digest": "sha512", 00:17:35.582 "dhgroup": "ffdhe8192" 00:17:35.582 } 00:17:35.582 } 00:17:35.582 ]' 00:17:35.582 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.582 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:35.582 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.582 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:35.582 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.840 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.840 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.840 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.098 14:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:17:36.098 14:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:17:36.663 14:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.663 14:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:36.663 14:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.663 14:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.663 14:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.663 14:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key3 00:17:36.663 14:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.663 14:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.663 14:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.663 14:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:36.663 14:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:36.921 14:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:36.921 14:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:36.921 14:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:36.921 14:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:36.921 14:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:36.921 14:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:36.921 14:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:36.921 14:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:36.921 14:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:36.921 14:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.179 request: 00:17:37.179 { 00:17:37.179 "name": "nvme0", 00:17:37.179 "trtype": "tcp", 00:17:37.179 "traddr": "10.0.0.3", 00:17:37.179 "adrfam": "ipv4", 00:17:37.179 "trsvcid": "4420", 00:17:37.179 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:37.179 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:17:37.179 "prchk_reftag": false, 00:17:37.179 "prchk_guard": false, 00:17:37.179 "hdgst": false, 00:17:37.179 "ddgst": false, 00:17:37.179 "dhchap_key": "key3", 00:17:37.179 "allow_unrecognized_csi": false, 00:17:37.179 "method": "bdev_nvme_attach_controller", 00:17:37.179 "req_id": 1 00:17:37.179 } 00:17:37.179 Got JSON-RPC error response 00:17:37.179 response: 00:17:37.179 { 00:17:37.179 "code": -5, 00:17:37.179 "message": "Input/output error" 00:17:37.179 } 00:17:37.179 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:37.179 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:37.179 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:37.179 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:37.179 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:37.179 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:37.179 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:37.179 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:37.179 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:37.179 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:37.179 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:37.179 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:37.179 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.179 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:37.179 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.179 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:37.179 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.179 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.437 request: 00:17:37.437 { 00:17:37.437 "name": "nvme0", 00:17:37.437 "trtype": "tcp", 00:17:37.437 "traddr": "10.0.0.3", 00:17:37.437 "adrfam": "ipv4", 00:17:37.437 "trsvcid": "4420", 00:17:37.437 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:37.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:17:37.437 "prchk_reftag": false, 00:17:37.437 "prchk_guard": false, 00:17:37.437 "hdgst": false, 00:17:37.437 "ddgst": false, 00:17:37.437 "dhchap_key": "key3", 00:17:37.437 "allow_unrecognized_csi": false, 00:17:37.437 "method": "bdev_nvme_attach_controller", 00:17:37.437 "req_id": 1 00:17:37.437 } 00:17:37.437 Got JSON-RPC error response 00:17:37.437 response: 00:17:37.437 { 00:17:37.437 "code": -5, 00:17:37.437 "message": "Input/output error" 00:17:37.437 } 00:17:37.437 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:37.437 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:37.437 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:37.437 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:37.437 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:37.437 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:37.437 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:37.437 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:37.437 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:37.437 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:37.695 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:37.695 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.695 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.695 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.695 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:37.695 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.695 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.695 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.695 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:37.695 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:37.695 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:37.695 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:37.695 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.695 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:37.695 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.695 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:37.695 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:37.695 14:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:37.952 request: 00:17:37.953 { 00:17:37.953 "name": "nvme0", 00:17:37.953 "trtype": "tcp", 00:17:37.953 "traddr": "10.0.0.3", 00:17:37.953 "adrfam": "ipv4", 00:17:37.953 "trsvcid": "4420", 00:17:37.953 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:37.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:17:37.953 "prchk_reftag": false, 00:17:37.953 "prchk_guard": false, 00:17:37.953 "hdgst": false, 00:17:37.953 "ddgst": false, 00:17:37.953 "dhchap_key": "key0", 00:17:37.953 "dhchap_ctrlr_key": "key1", 00:17:37.953 "allow_unrecognized_csi": false, 00:17:37.953 "method": "bdev_nvme_attach_controller", 00:17:37.953 "req_id": 1 00:17:37.953 } 00:17:37.953 Got JSON-RPC error response 00:17:37.953 response: 00:17:37.953 { 00:17:37.953 "code": -5, 00:17:37.953 "message": "Input/output error" 00:17:37.953 } 00:17:37.953 14:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:37.953 14:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:37.953 14:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:37.953 14:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:17:37.953 14:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:37.953 14:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:37.953 14:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:38.210 nvme0n1 00:17:38.210 14:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:38.210 14:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.210 14:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:38.468 14:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.468 14:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.468 14:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.725 14:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key1 00:17:38.726 14:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.726 14:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.726 14:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.726 14:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:38.726 14:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:38.726 14:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:39.665 nvme0n1 00:17:39.665 14:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:39.665 14:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.665 14:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:39.665 14:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.665 14:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:39.665 14:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.665 14:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.665 14:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.665 14:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:39.665 14:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.665 14:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:39.922 14:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.922 14:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:17:39.922 14:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid 0c7d476c-d4d7-4594-a48a-578d93697ffa -l 0 --dhchap-secret DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: --dhchap-ctrl-secret DHHC-1:03:ODdiNjA1OGJkMTRlNTY3ZWU0NWU3ZDFmMWMwNWZmODU1ZDNhNzA4NDE2NDgwN2I0MDkyNDVhNDk5YjYyYTMyNMc6MG4=: 00:17:40.495 14:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:40.495 14:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:40.495 14:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:40.495 14:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:40.495 14:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:40.495 14:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:40.495 14:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:40.495 14:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.495 14:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.751 14:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:40.751 14:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:40.751 14:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:40.751 14:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:40.751 14:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.751 14:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:40.751 14:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.751 14:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:40.751 14:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:40.751 14:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:41.316 request: 00:17:41.316 { 00:17:41.316 "name": "nvme0", 00:17:41.316 "trtype": "tcp", 00:17:41.316 "traddr": "10.0.0.3", 00:17:41.316 "adrfam": "ipv4", 00:17:41.316 "trsvcid": "4420", 00:17:41.316 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:41.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa", 00:17:41.316 "prchk_reftag": false, 00:17:41.316 "prchk_guard": false, 00:17:41.316 "hdgst": false, 00:17:41.316 "ddgst": false, 00:17:41.316 "dhchap_key": "key1", 00:17:41.316 "allow_unrecognized_csi": false, 00:17:41.316 "method": "bdev_nvme_attach_controller", 00:17:41.316 "req_id": 1 00:17:41.316 } 00:17:41.316 Got JSON-RPC error response 00:17:41.316 response: 00:17:41.316 { 00:17:41.316 "code": -5, 00:17:41.316 "message": "Input/output error" 00:17:41.316 } 00:17:41.316 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:41.316 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:41.316 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:41.316 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:41.316 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:41.316 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:41.316 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:42.249 nvme0n1 00:17:42.249 
14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:42.249 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:42.249 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.249 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.249 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.249 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.507 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:42.507 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.507 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.507 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.507 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:42.507 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:42.507 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:42.765 nvme0n1 00:17:42.765 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:42.765 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:42.765 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.021 14:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.021 14:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.021 14:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.279 14:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:43.279 14:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.279 14:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.279 14:43:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.279 14:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: '' 2s 00:17:43.279 14:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:43.279 14:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:43.279 14:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: 00:17:43.279 14:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:43.279 14:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:43.279 14:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:43.279 14:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: ]] 00:17:43.279 14:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZjM1ZDFiZjQ0MTMwMjhiNWI3YzY2ZTkyZWE3Y2M2MzbHGhPh: 00:17:43.279 14:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:43.279 14:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:43.279 14:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:45.177 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:45.177 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:17:45.177 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:45.177 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:17:45.177 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:17:45.177 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:17:45.177 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:17:45.177 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:45.177 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.177 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.177 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.178 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: 2s 00:17:45.178 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:45.178 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:45.178 14:43:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:45.178 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: 00:17:45.178 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:45.178 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:45.178 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:45.178 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: ]] 00:17:45.178 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MWRlYmMyMzIwNDZiYTgxNjFmZWIyMWJhOGVkYmExNjJjOTEwMjdiN2Y0MGI1YWQ0A6J0kA==: 00:17:45.178 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:45.178 14:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:47.704 14:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:47.704 14:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:17:47.704 14:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:47.704 14:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:17:47.704 14:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:17:47.704 14:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:17:47.704 14:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:17:47.704 14:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.704 14:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:47.704 14:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.704 14:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.704 14:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.704 14:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:47.704 14:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:47.704 14:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:48.269 nvme0n1 00:17:48.269 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:48.269 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.269 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.269 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.269 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:48.269 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:48.860 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:48.860 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.860 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:48.860 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.860 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:48.860 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.860 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.860 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.860 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:48.860 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:49.118 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:49.118 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.118 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:49.377 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.377 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:49.377 14:43:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.377 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.377 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.377 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:49.377 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:49.377 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:49.377 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:49.377 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:49.377 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:49.377 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:49.377 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:49.377 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:49.942 request: 00:17:49.942 { 00:17:49.942 "name": "nvme0", 00:17:49.942 "dhchap_key": "key1", 00:17:49.942 "dhchap_ctrlr_key": "key3", 00:17:49.942 "method": "bdev_nvme_set_keys", 00:17:49.942 "req_id": 1 00:17:49.942 } 00:17:49.942 Got JSON-RPC error response 00:17:49.942 response: 00:17:49.942 { 00:17:49.942 "code": -13, 00:17:49.942 "message": "Permission denied" 00:17:49.942 } 00:17:49.942 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:49.942 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:49.942 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:49.942 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:49.942 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:49.942 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.942 14:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:50.199 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:50.199 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:51.130 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:51.130 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:51.130 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.387 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:51.387 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:51.387 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.388 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.388 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.388 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:51.388 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:51.388 14:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:52.320 nvme0n1 00:17:52.320 14:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:52.320 14:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.320 14:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.320 14:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.320 14:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:52.320 14:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:52.320 14:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:52.320 14:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:52.320 14:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:52.320 14:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:52.320 14:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:52.320 14:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:52.320 14:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:52.577 request: 00:17:52.577 { 00:17:52.577 "name": "nvme0", 00:17:52.577 "dhchap_key": "key2", 00:17:52.577 "dhchap_ctrlr_key": "key0", 00:17:52.577 "method": "bdev_nvme_set_keys", 00:17:52.577 "req_id": 1 00:17:52.577 } 00:17:52.577 Got JSON-RPC error response 00:17:52.577 response: 00:17:52.577 { 00:17:52.577 "code": -13, 00:17:52.577 "message": "Permission denied" 00:17:52.577 } 00:17:52.577 14:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:52.577 14:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:52.577 14:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:52.577 14:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:52.577 14:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:52.577 14:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.577 14:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:52.834 14:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:52.834 14:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:53.767 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:53.767 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:53.767 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.025 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:54.025 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:54.025 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:54.025 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 66238 00:17:54.025 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 66238 ']' 00:17:54.025 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 66238 00:17:54.025 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:17:54.025 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:54.025 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66238 00:17:54.025 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:54.025 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:54.025 killing process with pid 66238 00:17:54.025 14:44:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66238' 00:17:54.025 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 66238 00:17:54.025 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 66238 00:17:54.282 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:54.282 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:54.282 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:54.282 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:54.282 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:54.282 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:54.282 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:54.282 rmmod nvme_tcp 00:17:54.282 rmmod nvme_fabrics 00:17:54.282 rmmod nvme_keyring 00:17:54.282 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:54.282 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:54.282 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:54.282 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 68973 ']' 00:17:54.282 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 68973 00:17:54.282 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 68973 ']' 00:17:54.282 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 68973 00:17:54.282 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:17:54.282 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:54.282 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68973 00:17:54.282 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:54.282 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:54.282 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68973' 00:17:54.282 killing process with pid 68973 00:17:54.282 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 68973 00:17:54.282 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 68973 00:17:54.540 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:54.540 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:54.540 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:54.540 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:54.540 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
00:17:54.540 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:54.540 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:54.540 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:54.540 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:54.540 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:54.540 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:54.540 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:54.540 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:54.540 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:54.540 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:54.540 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:54.540 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:54.540 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:54.540 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:54.540 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.En4 /tmp/spdk.key-sha256.Voq /tmp/spdk.key-sha384.I9Q /tmp/spdk.key-sha512.rRN /tmp/spdk.key-sha512.fDr /tmp/spdk.key-sha384.eU2 /tmp/spdk.key-sha256.EiI '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:17:54.798 00:17:54.798 real 2m34.852s 00:17:54.798 user 6m4.884s 00:17:54.798 sys 0m20.104s 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.798 ************************************ 00:17:54.798 END TEST nvmf_auth_target 
00:17:54.798 ************************************ 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:54.798 ************************************ 00:17:54.798 START TEST nvmf_bdevio_no_huge 00:17:54.798 ************************************ 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:54.798 * Looking for test storage... 00:17:54.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:54.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.798 --rc genhtml_branch_coverage=1 00:17:54.798 --rc genhtml_function_coverage=1 00:17:54.798 --rc genhtml_legend=1 00:17:54.798 --rc geninfo_all_blocks=1 00:17:54.798 --rc geninfo_unexecuted_blocks=1 00:17:54.798 00:17:54.798 ' 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:54.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.798 --rc genhtml_branch_coverage=1 00:17:54.798 --rc genhtml_function_coverage=1 00:17:54.798 --rc genhtml_legend=1 00:17:54.798 --rc geninfo_all_blocks=1 00:17:54.798 --rc geninfo_unexecuted_blocks=1 00:17:54.798 00:17:54.798 ' 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:54.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.798 --rc genhtml_branch_coverage=1 00:17:54.798 --rc genhtml_function_coverage=1 00:17:54.798 --rc genhtml_legend=1 00:17:54.798 --rc geninfo_all_blocks=1 00:17:54.798 --rc geninfo_unexecuted_blocks=1 00:17:54.798 00:17:54.798 ' 00:17:54.798 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:54.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.798 --rc genhtml_branch_coverage=1 00:17:54.798 --rc genhtml_function_coverage=1 00:17:54.798 --rc genhtml_legend=1 00:17:54.798 --rc geninfo_all_blocks=1 00:17:54.798 --rc geninfo_unexecuted_blocks=1 00:17:54.798 00:17:54.798 ' 00:17:54.799 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:54.799 
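The lt 1.15 2 / cmp_versions exchange above is the harness checking whether the installed lcov predates 2.x before it opts into the branch/function coverage flags. A rough stand-in for that component-wise compare; the real scripts/common.sh helper handles more operators, this only covers the '<' case used here:

# Rough stand-in for the cmp_versions '<' path traced above.
lt() {                                    # lt 1.15 2 -> status 0 when $1 < $2
  local IFS='.-:' a b i
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    ((${a[i]:-0} < ${b[i]:-0})) && return 0
    ((${a[i]:-0} > ${b[i]:-0})) && return 1
  done
  return 1                                # equal versions are not "less than"
}
if lt "$(lcov --version | awk '{print $NF}')" 2; then
  LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi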
14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:54.799 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:54.799 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:54.799 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:54.799 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:54.799 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:54.799 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:54.799 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:54.799 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:54.799 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:55.057 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:55.057 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:55.058 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:55.058 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:55.058 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:55.058 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:55.058 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:55.058 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:55.058 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:55.058 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:55.058 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:55.058 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:55.058 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:55.058 
14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:55.058 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:55.058 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:55.058 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:55.058 Cannot find device "nvmf_init_br" 00:17:55.058 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:17:55.058 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:55.058 Cannot find device "nvmf_init_br2" 00:17:55.058 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:17:55.058 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:55.058 Cannot find device "nvmf_tgt_br" 00:17:55.058 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:17:55.058 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:55.058 Cannot find device "nvmf_tgt_br2" 00:17:55.058 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:17:55.058 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:55.058 Cannot find device "nvmf_init_br" 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:55.058 Cannot find device "nvmf_init_br2" 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:55.058 Cannot find device "nvmf_tgt_br" 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:55.058 Cannot find device "nvmf_tgt_br2" 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:55.058 Cannot find device "nvmf_br" 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:55.058 Cannot find device "nvmf_init_if" 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:55.058 Cannot find device "nvmf_init_if2" 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:17:55.058 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:55.058 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:55.058 14:44:04 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:55.058 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:55.316 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:55.316 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:55.316 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:55.316 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:55.316 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:55.316 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:55.316 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:17:55.316 00:17:55.316 --- 10.0.0.3 ping statistics --- 00:17:55.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.316 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:55.316 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:55.316 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:55.316 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:17:55.316 00:17:55.316 --- 10.0.0.4 ping statistics --- 00:17:55.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.316 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:17:55.316 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:55.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:55.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:55.316 00:17:55.316 --- 10.0.0.1 ping statistics --- 00:17:55.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.316 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:55.316 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:55.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
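Everything from ip netns add to the pings is nvmf_veth_init building the test fabric: the initiator ends of the veth pairs stay in the root namespace with 10.0.0.1/.2, the target ends move into nvmf_tgt_ns_spdk with 10.0.0.3/.4, the bridge halves are enslaved to nvmf_br, and SPDK_NVMF-tagged ACCEPT rules open TCP port 4420. A condensed sketch of a single initiator/target leg with the same names; the run above repeats this for the second pair of interfaces:

# One leg of the veth/bridge fabric assembled above (second leg is analogous).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.3      # root namespace reaches the target address over the bridge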
00:17:55.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.033 ms 00:17:55.316 00:17:55.316 --- 10.0.0.2 ping statistics --- 00:17:55.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.316 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:17:55.316 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:55.316 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:17:55.316 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:55.316 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:55.316 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:55.316 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:55.316 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:55.316 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:55.316 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:55.316 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:55.316 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:55.316 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:55.316 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:55.316 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=69587 00:17:55.316 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 69587 00:17:55.316 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:55.316 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 69587 ']' 00:17:55.316 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.316 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:55.316 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.316 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:55.316 14:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:55.316 [2024-11-04 14:44:04.269675] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
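With all four addresses answering pings, nvmfappstart launches the target inside the namespace. This is the no-huge variant, so the application gets --no-huge -s 1024 and DPDK backs its 1024 MB with ordinary anonymous memory instead of hugepages; waitforlisten then blocks until the RPC socket answers before any rpc_cmd is issued. A minimal sketch of that launch-and-wait step, reusing the command line logged above and assuming the default /var/tmp/spdk.sock:

# Launch nvmf_tgt in the namespace without hugepages, then wait for its RPC socket.
ip netns exec nvmf_tgt_ns_spdk \
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5            # the real waitforlisten also checks the pid and enforces a timeout
done
echo "nvmf_tgt is up with pid $nvmfpid"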
00:17:55.316 [2024-11-04 14:44:04.269731] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:55.316 [2024-11-04 14:44:04.410053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:55.582 [2024-11-04 14:44:04.459134] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:55.582 [2024-11-04 14:44:04.459175] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:55.582 [2024-11-04 14:44:04.459181] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:55.582 [2024-11-04 14:44:04.459186] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:55.582 [2024-11-04 14:44:04.459191] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:55.582 [2024-11-04 14:44:04.459562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:55.582 [2024-11-04 14:44:04.460288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:55.582 [2024-11-04 14:44:04.460663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:55.582 [2024-11-04 14:44:04.460897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:55.582 [2024-11-04 14:44:04.465538] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:56.147 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:56.147 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:17:56.147 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:56.147 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:56.147 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:56.147 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:56.147 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:56.147 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.147 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:56.147 [2024-11-04 14:44:05.172839] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:56.147 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.147 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:56.147 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.147 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:56.147 Malloc0 00:17:56.147 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.147 14:44:05 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:56.147 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.147 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:56.147 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.147 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:56.147 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.147 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:56.147 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.147 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:56.147 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.147 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:56.147 [2024-11-04 14:44:05.212976] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:56.147 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.147 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:56.147 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:56.147 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:56.147 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:56.148 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:56.148 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:56.148 { 00:17:56.148 "params": { 00:17:56.148 "name": "Nvme$subsystem", 00:17:56.148 "trtype": "$TEST_TRANSPORT", 00:17:56.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:56.148 "adrfam": "ipv4", 00:17:56.148 "trsvcid": "$NVMF_PORT", 00:17:56.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:56.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:56.148 "hdgst": ${hdgst:-false}, 00:17:56.148 "ddgst": ${ddgst:-false} 00:17:56.148 }, 00:17:56.148 "method": "bdev_nvme_attach_controller" 00:17:56.148 } 00:17:56.148 EOF 00:17:56.148 )") 00:17:56.148 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:56.148 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
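The rpc_cmd sequence above is the entire target-side provisioning for this test: one TCP transport, one 64 MiB / 512-byte-block malloc bdev, and one subsystem exposing it on 10.0.0.3:4420. Expressed directly against scripts/rpc.py, and assuming the target's default /var/tmp/spdk.sock, the same setup looks like this:

# Target-side provisioning matching the rpc_cmd calls traced above.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_create_transport -t tcp -o -u 8192                 # flags exactly as issued above
$rpc bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

Once the listener is up, the initiator side only needs the bdev_nvme_attach_controller JSON printed next, which bdevio consumes over /dev/fd/62 to reach Nvme1n1.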
00:17:56.148 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:56.148 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:56.148 "params": { 00:17:56.148 "name": "Nvme1", 00:17:56.148 "trtype": "tcp", 00:17:56.148 "traddr": "10.0.0.3", 00:17:56.148 "adrfam": "ipv4", 00:17:56.148 "trsvcid": "4420", 00:17:56.148 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.148 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:56.148 "hdgst": false, 00:17:56.148 "ddgst": false 00:17:56.148 }, 00:17:56.148 "method": "bdev_nvme_attach_controller" 00:17:56.148 }' 00:17:56.148 [2024-11-04 14:44:05.252959] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:17:56.148 [2024-11-04 14:44:05.253022] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid69623 ] 00:17:56.405 [2024-11-04 14:44:05.394896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:56.405 [2024-11-04 14:44:05.444641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.405 [2024-11-04 14:44:05.444715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:56.405 [2024-11-04 14:44:05.444717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.405 [2024-11-04 14:44:05.457892] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:56.662 I/O targets: 00:17:56.662 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:56.662 00:17:56.662 00:17:56.662 CUnit - A unit testing framework for C - Version 2.1-3 00:17:56.662 http://cunit.sourceforge.net/ 00:17:56.662 00:17:56.662 00:17:56.662 Suite: bdevio tests on: Nvme1n1 00:17:56.662 Test: blockdev write read block ...passed 00:17:56.662 Test: blockdev write zeroes read block ...passed 00:17:56.662 Test: blockdev write zeroes read no split ...passed 00:17:56.662 Test: blockdev write zeroes read split ...passed 00:17:56.662 Test: blockdev write zeroes read split partial ...passed 00:17:56.662 Test: blockdev reset ...[2024-11-04 14:44:05.639734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:56.662 [2024-11-04 14:44:05.639820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x51c310 (9): Bad file descriptor 00:17:56.662 [2024-11-04 14:44:05.653628] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:17:56.662 passed 00:17:56.662 Test: blockdev write read 8 blocks ...passed 00:17:56.662 Test: blockdev write read size > 128k ...passed 00:17:56.662 Test: blockdev write read invalid size ...passed 00:17:56.662 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:56.662 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:56.662 Test: blockdev write read max offset ...passed 00:17:56.662 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:56.662 Test: blockdev writev readv 8 blocks ...passed 00:17:56.662 Test: blockdev writev readv 30 x 1block ...passed 00:17:56.662 Test: blockdev writev readv block ...passed 00:17:56.662 Test: blockdev writev readv size > 128k ...passed 00:17:56.662 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:56.662 Test: blockdev comparev and writev ...[2024-11-04 14:44:05.661398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:56.662 [2024-11-04 14:44:05.661743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.662 [2024-11-04 14:44:05.662004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:56.662 [2024-11-04 14:44:05.662220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:56.662 [2024-11-04 14:44:05.662762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:56.662 [2024-11-04 14:44:05.663061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:56.662 [2024-11-04 14:44:05.663273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:56.662 [2024-11-04 14:44:05.663499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:56.662 [2024-11-04 14:44:05.663943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:56.662 [2024-11-04 14:44:05.664214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:56.662 [2024-11-04 14:44:05.664407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:56.663 [2024-11-04 14:44:05.664597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:56.663 [2024-11-04 14:44:05.665056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:56.663 [2024-11-04 14:44:05.665286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:56.663 [2024-11-04 14:44:05.665477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:56.663 [2024-11-04 14:44:05.665537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:56.663 passed 00:17:56.663 Test: blockdev nvme passthru rw ...passed 00:17:56.663 Test: blockdev nvme passthru vendor specific ...[2024-11-04 14:44:05.666312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:56.663 [2024-11-04 14:44:05.666414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:56.663 [2024-11-04 14:44:05.666622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:56.663 [2024-11-04 14:44:05.666730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:56.663 [2024-11-04 14:44:05.666914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:56.663 [2024-11-04 14:44:05.667028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:56.663 [2024-11-04 14:44:05.667216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:56.663 passed 00:17:56.663 Test: blockdev nvme admin passthru ...[2024-11-04 14:44:05.667313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:56.663 passed 00:17:56.663 Test: blockdev copy ...passed 00:17:56.663 00:17:56.663 Run Summary: Type Total Ran Passed Failed Inactive 00:17:56.663 suites 1 1 n/a 0 0 00:17:56.663 tests 23 23 23 0 0 00:17:56.663 asserts 152 152 152 0 n/a 00:17:56.663 00:17:56.663 Elapsed time = 0.146 seconds 00:17:56.920 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:56.920 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.920 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:56.920 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.920 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:56.920 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:56.920 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:56.920 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:56.920 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:56.920 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:56.920 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:56.920 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:56.920 rmmod nvme_tcp 00:17:56.920 rmmod nvme_fabrics 00:17:56.920 rmmod nvme_keyring 00:17:56.920 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:56.920 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:17:56.920 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:56.920 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 69587 ']' 00:17:56.920 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 69587 00:17:56.920 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 69587 ']' 00:17:56.920 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 69587 00:17:56.920 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:17:56.920 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:56.920 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69587 00:17:56.920 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:17:56.920 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:17:56.920 killing process with pid 69587 00:17:56.920 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69587' 00:17:56.920 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 69587 00:17:56.920 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 69587 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:57.485 14:44:06 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:17:57.485 00:17:57.485 real 0m2.788s 00:17:57.485 user 0m8.419s 00:17:57.485 sys 0m1.021s 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:57.485 ************************************ 00:17:57.485 END TEST nvmf_bdevio_no_huge 00:17:57.485 ************************************ 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:57.485 ************************************ 00:17:57.485 START TEST nvmf_tls 00:17:57.485 ************************************ 00:17:57.485 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:57.744 * Looking for test storage... 
00:17:57.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:57.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.744 --rc genhtml_branch_coverage=1 00:17:57.744 --rc genhtml_function_coverage=1 00:17:57.744 --rc genhtml_legend=1 00:17:57.744 --rc geninfo_all_blocks=1 00:17:57.744 --rc geninfo_unexecuted_blocks=1 00:17:57.744 00:17:57.744 ' 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:57.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.744 --rc genhtml_branch_coverage=1 00:17:57.744 --rc genhtml_function_coverage=1 00:17:57.744 --rc genhtml_legend=1 00:17:57.744 --rc geninfo_all_blocks=1 00:17:57.744 --rc geninfo_unexecuted_blocks=1 00:17:57.744 00:17:57.744 ' 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:57.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.744 --rc genhtml_branch_coverage=1 00:17:57.744 --rc genhtml_function_coverage=1 00:17:57.744 --rc genhtml_legend=1 00:17:57.744 --rc geninfo_all_blocks=1 00:17:57.744 --rc geninfo_unexecuted_blocks=1 00:17:57.744 00:17:57.744 ' 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:57.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.744 --rc genhtml_branch_coverage=1 00:17:57.744 --rc genhtml_function_coverage=1 00:17:57.744 --rc genhtml_legend=1 00:17:57.744 --rc geninfo_all_blocks=1 00:17:57.744 --rc geninfo_unexecuted_blocks=1 00:17:57.744 00:17:57.744 ' 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:57.744 14:44:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.744 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:57.745 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:57.745 
14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:57.745 Cannot find device "nvmf_init_br" 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:57.745 Cannot find device "nvmf_init_br2" 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:57.745 Cannot find device "nvmf_tgt_br" 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:57.745 Cannot find device "nvmf_tgt_br2" 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:57.745 Cannot find device "nvmf_init_br" 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:57.745 Cannot find device "nvmf_init_br2" 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:57.745 Cannot find device "nvmf_tgt_br" 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:57.745 Cannot find device "nvmf_tgt_br2" 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:57.745 Cannot find device "nvmf_br" 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:57.745 Cannot find device "nvmf_init_if" 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:17:57.745 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:58.003 Cannot find device "nvmf_init_if2" 00:17:58.004 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:17:58.004 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:58.004 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:58.004 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:17:58.004 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:58.004 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:58.004 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:17:58.004 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:58.004 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:58.004 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:58.004 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:58.004 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:58.004 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:58.004 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:58.004 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:58.004 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:58.004 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:58.004 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:58.004 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:58.004 14:44:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:58.004 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:58.004 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:17:58.004 00:17:58.004 --- 10.0.0.3 ping statistics --- 00:17:58.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.004 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:58.004 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:58.004 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.029 ms 00:17:58.004 00:17:58.004 --- 10.0.0.4 ping statistics --- 00:17:58.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.004 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:58.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:58.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:17:58.004 00:17:58.004 --- 10.0.0.1 ping statistics --- 00:17:58.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.004 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:58.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:58.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:17:58.004 00:17:58.004 --- 10.0.0.2 ping statistics --- 00:17:58.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.004 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=69842 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 69842 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 69842 ']' 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:58.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:58.004 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.004 [2024-11-04 14:44:07.143002] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
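The nvmf_veth_init steps above assemble a disposable test network rather than touching real NICs: veth pairs for the initiator (nvmf_init_if/nvmf_init_if2 at 10.0.0.1 and 10.0.0.2) and for the target (nvmf_tgt_if/nvmf_tgt_if2 at 10.0.0.3 and 10.0.0.4, moved into the nvmf_tgt_ns_spdk namespace), all joined through the nvmf_br bridge, with iptables rules admitting TCP port 4420 and the pings as a reachability check. A condensed sketch of one initiator/target pair, assuming the interface names and addressing shown in the trace (the second pair is set up the same way):

  # Minimal veth/bridge/netns topology for the NVMe/TCP TLS tests (names as in the trace).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target end moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                            # root namespace -> target namespace sanity check

The ping statistics that follow in the trace are only that sanity check; the interesting traffic starts once the TLS listener is configured below.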
00:17:58.004 [2024-11-04 14:44:07.143055] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.262 [2024-11-04 14:44:07.285638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.262 [2024-11-04 14:44:07.319647] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.262 [2024-11-04 14:44:07.319681] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.262 [2024-11-04 14:44:07.319688] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.262 [2024-11-04 14:44:07.319693] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.262 [2024-11-04 14:44:07.319697] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:58.262 [2024-11-04 14:44:07.319956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.195 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:59.195 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:17:59.195 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:59.195 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:59.195 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:59.195 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.195 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:59.195 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:59.195 true 00:17:59.195 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:59.195 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:59.453 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:59.453 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:59.453 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:59.710 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:59.710 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:59.968 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:59.968 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:59.968 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:59.968 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:17:59.968 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:00.226 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:00.226 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:00.226 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:00.226 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:00.483 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:00.483 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:00.483 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:00.739 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:00.739 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:00.739 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:00.739 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:00.739 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:00.995 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:00.995 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:01.252 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:01.252 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:01.252 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:01.253 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:01.253 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:01.253 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:01.253 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:18:01.253 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:01.253 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:01.253 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:01.253 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:01.253 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:01.253 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:01.253 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:01.253 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:18:01.253 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:01.253 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:01.253 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:01.253 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:01.253 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.q24wiLko5H 00:18:01.253 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:01.253 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.RBeItjjU1m 00:18:01.253 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:01.253 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:01.253 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.q24wiLko5H 00:18:01.253 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.RBeItjjU1m 00:18:01.253 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:01.509 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:01.774 [2024-11-04 14:44:10.883072] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:02.031 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.q24wiLko5H 00:18:02.031 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.q24wiLko5H 00:18:02.031 14:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:02.031 [2024-11-04 14:44:11.114014] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:02.031 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:02.289 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:18:02.547 [2024-11-04 14:44:11.482072] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:02.547 [2024-11-04 14:44:11.482228] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:02.547 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:02.804 malloc0 00:18:02.804 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:03.062 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.q24wiLko5H 00:18:03.062 14:44:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:03.319 14:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.q24wiLko5H 00:18:15.508 Initializing NVMe Controllers 00:18:15.508 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:15.508 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:15.508 Initialization complete. Launching workers. 00:18:15.508 ======================================================== 00:18:15.508 Latency(us) 00:18:15.508 Device Information : IOPS MiB/s Average min max 00:18:15.508 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17817.66 69.60 3592.20 1113.03 4309.55 00:18:15.509 ======================================================== 00:18:15.509 Total : 17817.66 69.60 3592.20 1113.03 4309.55 00:18:15.509 00:18:15.509 14:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.q24wiLko5H 00:18:15.509 14:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:15.509 14:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:15.509 14:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:15.509 14:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.q24wiLko5H 00:18:15.509 14:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:15.509 14:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=70085 00:18:15.509 14:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:15.509 14:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 70085 /var/tmp/bdevperf.sock 00:18:15.509 14:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 70085 ']' 00:18:15.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:15.509 14:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:15.509 14:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:15.509 14:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
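To summarize the target-side sequence the tls.sh helpers just ran: nvmf_tgt is launched inside the target namespace with --wait-for-rpc, the ssl socket implementation is selected and pinned to TLS version 13, and then a TCP transport, a subsystem with a malloc namespace, a TLS-enabled listener (-k), and a PSK-protected host entry are configured over JSON-RPC. The key files generated above (/tmp/tmp.q24wiLko5H and /tmp/tmp.RBeItjjU1m in this run) hold PSK interchange strings of the form NVMeTLSkey-1:01:...: and are chmod 0600. A minimal sketch of the same RPC flow, with paths taken relative to the SPDK repo root and the key file as a placeholder:

  rpc=scripts/rpc.py            # the trace uses the absolute path under /home/vagrant/spdk_repo/spdk
  key_file=/tmp/tls_key0        # placeholder for a 0600-mode file containing "NVMeTLSkey-1:01:...:"
  $rpc sock_set_default_impl -i ssl
  $rpc sock_impl_set_options -i ssl --tls-version 13
  $rpc framework_start_init
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k makes this a TLS listener
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc keyring_file_add_key key0 "$key_file"
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

Once the host entry carries --psk key0, only initiators presenting that PSK for nqn.2016-06.io.spdk:host1 can complete the TLS handshake on port 4420.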
00:18:15.509 14:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:15.509 14:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.509 14:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:15.509 [2024-11-04 14:44:22.604954] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:18:15.509 [2024-11-04 14:44:22.605009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70085 ] 00:18:15.509 [2024-11-04 14:44:22.742349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.509 [2024-11-04 14:44:22.774323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:15.509 [2024-11-04 14:44:22.802663] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:15.509 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:15.509 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:15.509 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.q24wiLko5H 00:18:15.509 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:15.509 [2024-11-04 14:44:23.875625] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:15.509 TLSTESTn1 00:18:15.509 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:15.509 Running I/O for 10 seconds... 
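The initiator side mirrors this in miniature: bdevperf is launched idle (-z) on its own RPC socket, the same PSK is registered in its keyring, the controller is attached with --psk, and only then is the verify workload kicked off through bdevperf.py. A sketch under the same assumptions as above (repo-relative paths, placeholder key file):

  sock=/var/tmp/bdevperf.sock
  build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
  # wait until the RPC socket is listening before issuing RPCs (the harness does this with waitforlisten)
  scripts/rpc.py -s "$sock" keyring_file_add_key key0 /tmp/tls_key0
  scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests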
00:18:17.006 7039.00 IOPS, 27.50 MiB/s [2024-11-04T14:44:27.133Z] 7048.50 IOPS, 27.53 MiB/s [2024-11-04T14:44:28.068Z] 7064.00 IOPS, 27.59 MiB/s [2024-11-04T14:44:29.440Z] 7081.50 IOPS, 27.66 MiB/s [2024-11-04T14:44:30.373Z] 7091.20 IOPS, 27.70 MiB/s [2024-11-04T14:44:31.308Z] 7089.33 IOPS, 27.69 MiB/s [2024-11-04T14:44:32.242Z] 7096.43 IOPS, 27.72 MiB/s [2024-11-04T14:44:33.175Z] 7087.38 IOPS, 27.69 MiB/s [2024-11-04T14:44:34.109Z] 7084.67 IOPS, 27.67 MiB/s [2024-11-04T14:44:34.109Z] 7082.90 IOPS, 27.67 MiB/s 00:18:24.969 Latency(us) 00:18:24.969 [2024-11-04T14:44:34.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.969 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:24.969 Verification LBA range: start 0x0 length 0x2000 00:18:24.969 TLSTESTn1 : 10.01 7088.55 27.69 0.00 0.00 18029.19 3503.66 15728.64 00:18:24.969 [2024-11-04T14:44:34.109Z] =================================================================================================================== 00:18:24.969 [2024-11-04T14:44:34.109Z] Total : 7088.55 27.69 0.00 0.00 18029.19 3503.66 15728.64 00:18:24.969 { 00:18:24.969 "results": [ 00:18:24.969 { 00:18:24.969 "job": "TLSTESTn1", 00:18:24.969 "core_mask": "0x4", 00:18:24.969 "workload": "verify", 00:18:24.969 "status": "finished", 00:18:24.969 "verify_range": { 00:18:24.969 "start": 0, 00:18:24.969 "length": 8192 00:18:24.969 }, 00:18:24.969 "queue_depth": 128, 00:18:24.969 "io_size": 4096, 00:18:24.969 "runtime": 10.009657, 00:18:24.969 "iops": 7088.554582839352, 00:18:24.969 "mibps": 27.68966633921622, 00:18:24.969 "io_failed": 0, 00:18:24.969 "io_timeout": 0, 00:18:24.969 "avg_latency_us": 18029.19497572642, 00:18:24.969 "min_latency_us": 3503.6553846153847, 00:18:24.969 "max_latency_us": 15728.64 00:18:24.969 } 00:18:24.969 ], 00:18:24.969 "core_count": 1 00:18:24.969 } 00:18:24.969 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:24.969 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 70085 00:18:24.969 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 70085 ']' 00:18:24.969 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 70085 00:18:24.969 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:24.969 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:24.969 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70085 00:18:25.227 killing process with pid 70085 00:18:25.227 Received shutdown signal, test time was about 10.000000 seconds 00:18:25.227 00:18:25.227 Latency(us) 00:18:25.227 [2024-11-04T14:44:34.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.227 [2024-11-04T14:44:34.367Z] =================================================================================================================== 00:18:25.227 [2024-11-04T14:44:34.367Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:25.227 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:25.227 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:25.227 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process 
with pid 70085' 00:18:25.227 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 70085 00:18:25.227 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 70085 00:18:25.227 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RBeItjjU1m 00:18:25.227 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:25.227 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RBeItjjU1m 00:18:25.227 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:25.227 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:25.227 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:25.227 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:25.227 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RBeItjjU1m 00:18:25.227 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:25.227 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:25.227 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:25.227 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.RBeItjjU1m 00:18:25.227 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:25.227 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=70219 00:18:25.227 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:25.227 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 70219 /var/tmp/bdevperf.sock 00:18:25.227 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:25.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:25.227 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 70219 ']' 00:18:25.227 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:25.227 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:25.227 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:25.227 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:25.227 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.227 [2024-11-04 14:44:34.246801] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:18:25.227 [2024-11-04 14:44:34.247276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70219 ] 00:18:25.503 [2024-11-04 14:44:34.380386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.503 [2024-11-04 14:44:34.412252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:25.503 [2024-11-04 14:44:34.441174] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:25.503 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:25.503 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:25.503 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.RBeItjjU1m 00:18:25.776 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:25.776 [2024-11-04 14:44:34.847321] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:25.776 [2024-11-04 14:44:34.852530] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:25.776 [2024-11-04 14:44:34.853067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea5fb0 (107): Transport endpoint is not connected 00:18:25.776 [2024-11-04 14:44:34.854059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea5fb0 (9): Bad file descriptor 00:18:25.776 [2024-11-04 14:44:34.855057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:25.776 [2024-11-04 14:44:34.855071] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:18:25.776 [2024-11-04 14:44:34.855077] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:25.776 [2024-11-04 14:44:34.855084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:18:25.776 request: 00:18:25.776 { 00:18:25.776 "name": "TLSTEST", 00:18:25.776 "trtype": "tcp", 00:18:25.776 "traddr": "10.0.0.3", 00:18:25.776 "adrfam": "ipv4", 00:18:25.776 "trsvcid": "4420", 00:18:25.776 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.776 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:25.776 "prchk_reftag": false, 00:18:25.776 "prchk_guard": false, 00:18:25.776 "hdgst": false, 00:18:25.776 "ddgst": false, 00:18:25.776 "psk": "key0", 00:18:25.776 "allow_unrecognized_csi": false, 00:18:25.776 "method": "bdev_nvme_attach_controller", 00:18:25.776 "req_id": 1 00:18:25.776 } 00:18:25.776 Got JSON-RPC error response 00:18:25.776 response: 00:18:25.776 { 00:18:25.776 "code": -5, 00:18:25.776 "message": "Input/output error" 00:18:25.776 } 00:18:25.776 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 70219 00:18:25.776 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 70219 ']' 00:18:25.776 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 70219 00:18:25.776 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:25.776 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:25.776 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70219 00:18:25.776 killing process with pid 70219 00:18:25.776 Received shutdown signal, test time was about 10.000000 seconds 00:18:25.776 00:18:25.776 Latency(us) 00:18:25.776 [2024-11-04T14:44:34.916Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.776 [2024-11-04T14:44:34.916Z] =================================================================================================================== 00:18:25.776 [2024-11-04T14:44:34.916Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:25.776 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:25.776 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:25.776 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70219' 00:18:25.776 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 70219 00:18:25.776 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 70219 00:18:26.034 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:26.034 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:26.034 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:26.034 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:26.034 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:26.034 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.q24wiLko5H 00:18:26.034 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:26.034 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.q24wiLko5H 
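The attach failure above is the point of the test: target/tls.sh@147 (and 150 and 153 after it) presents a PSK, hostnqn, or subnqn combination the target was not provisioned for, so the TLS session cannot be established, bdev_nvme_attach_controller comes back with an input/output error, and the NOT wrapper turns that non-zero exit into a pass. A sketch of that negative-path expectation, assuming the same placeholder names as above:

  # Expect the attach to fail when the initiator presents a PSK the subsystem does not accept.
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tls_key_wrong
  if scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0; then
      echo "attach unexpectedly succeeded" >&2
      exit 1
  fi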
00:18:26.034 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:26.034 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:26.034 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:26.034 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:26.034 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.q24wiLko5H 00:18:26.034 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:26.034 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:26.034 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:26.034 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.q24wiLko5H 00:18:26.034 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:26.034 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=70235 00:18:26.034 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:26.034 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:26.034 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 70235 /var/tmp/bdevperf.sock 00:18:26.034 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 70235 ']' 00:18:26.034 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:26.034 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:26.034 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:26.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:26.034 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:26.034 14:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.034 [2024-11-04 14:44:35.022961] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:18:26.034 [2024-11-04 14:44:35.023024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70235 ] 00:18:26.035 [2024-11-04 14:44:35.154937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.294 [2024-11-04 14:44:35.187199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:26.294 [2024-11-04 14:44:35.215615] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:26.861 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:26.861 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:26.861 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.q24wiLko5H 00:18:27.119 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:27.378 [2024-11-04 14:44:36.269554] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:27.378 [2024-11-04 14:44:36.273408] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:27.378 [2024-11-04 14:44:36.273434] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:27.378 [2024-11-04 14:44:36.273463] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:27.378 [2024-11-04 14:44:36.274237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198dfb0 (107): Transport endpoint is not connected 00:18:27.378 [2024-11-04 14:44:36.275228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198dfb0 (9): Bad file descriptor 00:18:27.378 [2024-11-04 14:44:36.276227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:27.378 [2024-11-04 14:44:36.276242] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:18:27.378 [2024-11-04 14:44:36.276247] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:27.378 [2024-11-04 14:44:36.276254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:18:27.378 request: 00:18:27.378 { 00:18:27.378 "name": "TLSTEST", 00:18:27.378 "trtype": "tcp", 00:18:27.378 "traddr": "10.0.0.3", 00:18:27.378 "adrfam": "ipv4", 00:18:27.378 "trsvcid": "4420", 00:18:27.378 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.378 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:27.378 "prchk_reftag": false, 00:18:27.378 "prchk_guard": false, 00:18:27.378 "hdgst": false, 00:18:27.378 "ddgst": false, 00:18:27.378 "psk": "key0", 00:18:27.378 "allow_unrecognized_csi": false, 00:18:27.378 "method": "bdev_nvme_attach_controller", 00:18:27.378 "req_id": 1 00:18:27.378 } 00:18:27.378 Got JSON-RPC error response 00:18:27.378 response: 00:18:27.378 { 00:18:27.378 "code": -5, 00:18:27.378 "message": "Input/output error" 00:18:27.378 } 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 70235 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 70235 ']' 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 70235 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70235 00:18:27.378 killing process with pid 70235 00:18:27.378 Received shutdown signal, test time was about 10.000000 seconds 00:18:27.378 00:18:27.378 Latency(us) 00:18:27.378 [2024-11-04T14:44:36.518Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.378 [2024-11-04T14:44:36.518Z] =================================================================================================================== 00:18:27.378 [2024-11-04T14:44:36.518Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70235' 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 70235 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 70235 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.q24wiLko5H 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.q24wiLko5H 
00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.q24wiLko5H 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.q24wiLko5H 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=70263 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 70263 /var/tmp/bdevperf.sock 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 70263 ']' 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:27.378 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:27.379 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:27.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:27.379 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:27.379 14:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.379 [2024-11-04 14:44:36.466743] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:18:27.379 [2024-11-04 14:44:36.466832] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70263 ] 00:18:27.636 [2024-11-04 14:44:36.609895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.636 [2024-11-04 14:44:36.641888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.636 [2024-11-04 14:44:36.669799] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:28.569 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:28.569 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:28.569 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.q24wiLko5H 00:18:28.569 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:28.827 [2024-11-04 14:44:37.739631] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:28.827 [2024-11-04 14:44:37.743551] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:28.827 [2024-11-04 14:44:37.743577] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:28.827 [2024-11-04 14:44:37.743614] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:28.827 [2024-11-04 14:44:37.744375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x549fb0 (107): Transport endpoint is not connected 00:18:28.827 [2024-11-04 14:44:37.745367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x549fb0 (9): Bad file descriptor 00:18:28.827 [2024-11-04 14:44:37.746366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:28.827 [2024-11-04 14:44:37.746381] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:18:28.827 [2024-11-04 14:44:37.746388] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:28.827 [2024-11-04 14:44:37.746395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:18:28.827 request: 00:18:28.827 { 00:18:28.827 "name": "TLSTEST", 00:18:28.827 "trtype": "tcp", 00:18:28.827 "traddr": "10.0.0.3", 00:18:28.827 "adrfam": "ipv4", 00:18:28.827 "trsvcid": "4420", 00:18:28.827 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:28.827 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:28.827 "prchk_reftag": false, 00:18:28.827 "prchk_guard": false, 00:18:28.827 "hdgst": false, 00:18:28.827 "ddgst": false, 00:18:28.827 "psk": "key0", 00:18:28.827 "allow_unrecognized_csi": false, 00:18:28.827 "method": "bdev_nvme_attach_controller", 00:18:28.827 "req_id": 1 00:18:28.827 } 00:18:28.827 Got JSON-RPC error response 00:18:28.827 response: 00:18:28.827 { 00:18:28.827 "code": -5, 00:18:28.827 "message": "Input/output error" 00:18:28.827 } 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 70263 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 70263 ']' 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 70263 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70263 00:18:28.827 killing process with pid 70263 00:18:28.827 Received shutdown signal, test time was about 10.000000 seconds 00:18:28.827 00:18:28.827 Latency(us) 00:18:28.827 [2024-11-04T14:44:37.967Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.827 [2024-11-04T14:44:37.967Z] =================================================================================================================== 00:18:28.827 [2024-11-04T14:44:37.967Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70263' 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 70263 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 70263 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:28.827 14:44:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=70292 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 70292 /var/tmp/bdevperf.sock 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 70292 ']' 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:28.827 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:28.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:28.828 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:28.828 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.828 [2024-11-04 14:44:37.913184] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:18:28.828 [2024-11-04 14:44:37.913250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70292 ] 00:18:29.086 [2024-11-04 14:44:38.047376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.086 [2024-11-04 14:44:38.088722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:29.086 [2024-11-04 14:44:38.120613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:29.651 14:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:29.651 14:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:29.651 14:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:29.909 [2024-11-04 14:44:38.946296] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:29.909 [2024-11-04 14:44:38.946336] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:29.909 request: 00:18:29.909 { 00:18:29.909 "name": "key0", 00:18:29.909 "path": "", 00:18:29.909 "method": "keyring_file_add_key", 00:18:29.909 "req_id": 1 00:18:29.909 } 00:18:29.909 Got JSON-RPC error response 00:18:29.909 response: 00:18:29.909 { 00:18:29.909 "code": -1, 00:18:29.909 "message": "Operation not permitted" 00:18:29.909 } 00:18:29.909 14:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:30.166 [2024-11-04 14:44:39.114407] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:30.166 [2024-11-04 14:44:39.114455] bdev_nvme.c:6620:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:30.166 request: 00:18:30.166 { 00:18:30.166 "name": "TLSTEST", 00:18:30.166 "trtype": "tcp", 00:18:30.166 "traddr": "10.0.0.3", 00:18:30.166 "adrfam": "ipv4", 00:18:30.166 "trsvcid": "4420", 00:18:30.166 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.166 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:30.166 "prchk_reftag": false, 00:18:30.166 "prchk_guard": false, 00:18:30.166 "hdgst": false, 00:18:30.166 "ddgst": false, 00:18:30.166 "psk": "key0", 00:18:30.166 "allow_unrecognized_csi": false, 00:18:30.166 "method": "bdev_nvme_attach_controller", 00:18:30.166 "req_id": 1 00:18:30.166 } 00:18:30.166 Got JSON-RPC error response 00:18:30.166 response: 00:18:30.166 { 00:18:30.166 "code": -126, 00:18:30.166 "message": "Required key not available" 00:18:30.166 } 00:18:30.166 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 70292 00:18:30.166 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 70292 ']' 00:18:30.166 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 70292 00:18:30.166 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:30.166 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:30.166 14:44:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70292 00:18:30.166 killing process with pid 70292 00:18:30.166 Received shutdown signal, test time was about 10.000000 seconds 00:18:30.166 00:18:30.166 Latency(us) 00:18:30.166 [2024-11-04T14:44:39.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.166 [2024-11-04T14:44:39.306Z] =================================================================================================================== 00:18:30.166 [2024-11-04T14:44:39.306Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:30.166 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:30.166 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:30.166 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70292' 00:18:30.166 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 70292 00:18:30.166 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 70292 00:18:30.166 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:30.166 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:30.166 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:30.166 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:30.166 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:30.166 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 69842 00:18:30.166 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 69842 ']' 00:18:30.166 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 69842 00:18:30.166 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:30.166 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:30.166 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69842 00:18:30.166 killing process with pid 69842 00:18:30.166 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:30.166 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:30.166 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69842' 00:18:30.166 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 69842 00:18:30.166 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 69842 00:18:30.423 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:30.423 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:30.423 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:30.423 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:18:30.423 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:30.423 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:30.423 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:30.423 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:30.423 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:30.423 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.H8YwNrhVaH 00:18:30.423 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:30.423 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.H8YwNrhVaH 00:18:30.424 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:30.424 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:30.424 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:30.424 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.424 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=70325 00:18:30.424 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 70325 00:18:30.424 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:30.424 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 70325 ']' 00:18:30.424 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.424 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:30.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.424 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.424 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:30.424 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.424 [2024-11-04 14:44:39.483165] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:18:30.424 [2024-11-04 14:44:39.483226] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.680 [2024-11-04 14:44:39.623998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.680 [2024-11-04 14:44:39.657788] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:30.680 [2024-11-04 14:44:39.657952] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:30.680 [2024-11-04 14:44:39.658009] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:30.680 [2024-11-04 14:44:39.658124] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:30.680 [2024-11-04 14:44:39.658170] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:30.680 [2024-11-04 14:44:39.658443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.680 [2024-11-04 14:44:39.687578] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:31.245 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:31.245 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:31.245 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:31.245 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:31.245 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.245 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.245 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.H8YwNrhVaH 00:18:31.245 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.H8YwNrhVaH 00:18:31.245 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:31.502 [2024-11-04 14:44:40.534268] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:31.502 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:31.759 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:18:32.016 [2024-11-04 14:44:40.906332] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:32.016 [2024-11-04 14:44:40.906493] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:32.016 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:32.016 malloc0 00:18:32.016 14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:32.313 14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.H8YwNrhVaH 00:18:32.571 14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:32.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:32.829 14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H8YwNrhVaH 00:18:32.829 14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:32.830 14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:32.830 14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:32.830 14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.H8YwNrhVaH 00:18:32.830 14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:32.830 14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=70380 00:18:32.830 14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:32.830 14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:32.830 14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 70380 /var/tmp/bdevperf.sock 00:18:32.830 14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 70380 ']' 00:18:32.830 14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:32.830 14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:32.830 14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:32.830 14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:32.830 14:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.830 [2024-11-04 14:44:41.772331] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:18:32.830 [2024-11-04 14:44:41.772395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70380 ] 00:18:32.830 [2024-11-04 14:44:41.913231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.830 [2024-11-04 14:44:41.950238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.088 [2024-11-04 14:44:41.982073] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:33.653 14:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:33.653 14:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:33.653 14:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.H8YwNrhVaH 00:18:33.910 14:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:33.910 [2024-11-04 14:44:43.033825] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:34.169 TLSTESTn1 00:18:34.169 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:34.169 Running I/O for 10 seconds... 00:18:36.105 6342.00 IOPS, 24.77 MiB/s [2024-11-04T14:44:46.622Z] 6356.50 IOPS, 24.83 MiB/s [2024-11-04T14:44:47.566Z] 6269.33 IOPS, 24.49 MiB/s [2024-11-04T14:44:48.501Z] 6224.50 IOPS, 24.31 MiB/s [2024-11-04T14:44:49.436Z] 6125.00 IOPS, 23.93 MiB/s [2024-11-04T14:44:50.369Z] 6057.67 IOPS, 23.66 MiB/s [2024-11-04T14:44:51.301Z] 6023.57 IOPS, 23.53 MiB/s [2024-11-04T14:44:52.272Z] 6091.00 IOPS, 23.79 MiB/s [2024-11-04T14:44:53.647Z] 6109.11 IOPS, 23.86 MiB/s [2024-11-04T14:44:53.647Z] 6107.80 IOPS, 23.86 MiB/s 00:18:44.507 Latency(us) 00:18:44.507 [2024-11-04T14:44:53.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.507 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:44.507 Verification LBA range: start 0x0 length 0x2000 00:18:44.507 TLSTESTn1 : 10.02 6110.62 23.87 0.00 0.00 20909.20 5545.35 22483.89 00:18:44.507 [2024-11-04T14:44:53.647Z] =================================================================================================================== 00:18:44.507 [2024-11-04T14:44:53.647Z] Total : 6110.62 23.87 0.00 0.00 20909.20 5545.35 22483.89 00:18:44.507 { 00:18:44.507 "results": [ 00:18:44.507 { 00:18:44.507 "job": "TLSTESTn1", 00:18:44.507 "core_mask": "0x4", 00:18:44.507 "workload": "verify", 00:18:44.507 "status": "finished", 00:18:44.507 "verify_range": { 00:18:44.507 "start": 0, 00:18:44.507 "length": 8192 00:18:44.507 }, 00:18:44.507 "queue_depth": 128, 00:18:44.507 "io_size": 4096, 00:18:44.507 "runtime": 10.016, 00:18:44.507 "iops": 6110.623003194888, 00:18:44.507 "mibps": 23.86962110623003, 00:18:44.507 "io_failed": 0, 00:18:44.507 "io_timeout": 0, 00:18:44.507 "avg_latency_us": 20909.203916888287, 00:18:44.507 "min_latency_us": 5545.3538461538465, 00:18:44.507 
"max_latency_us": 22483.88923076923 00:18:44.507 } 00:18:44.507 ], 00:18:44.507 "core_count": 1 00:18:44.507 } 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 70380 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 70380 ']' 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 70380 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70380 00:18:44.507 killing process with pid 70380 00:18:44.507 Received shutdown signal, test time was about 10.000000 seconds 00:18:44.507 00:18:44.507 Latency(us) 00:18:44.507 [2024-11-04T14:44:53.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.507 [2024-11-04T14:44:53.647Z] =================================================================================================================== 00:18:44.507 [2024-11-04T14:44:53.647Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70380' 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 70380 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 70380 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.H8YwNrhVaH 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H8YwNrhVaH 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H8YwNrhVaH 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H8YwNrhVaH 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.H8YwNrhVaH 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=70516 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 70516 /var/tmp/bdevperf.sock 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 70516 ']' 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:44.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:44.507 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.507 [2024-11-04 14:44:53.412880] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:18:44.507 [2024-11-04 14:44:53.412944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70516 ] 00:18:44.507 [2024-11-04 14:44:53.544940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.507 [2024-11-04 14:44:53.587269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:44.507 [2024-11-04 14:44:53.620464] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:44.782 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:44.782 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:44.782 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.H8YwNrhVaH 00:18:44.782 [2024-11-04 14:44:53.858374] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.H8YwNrhVaH': 0100666 00:18:44.782 [2024-11-04 14:44:53.858422] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:44.782 request: 00:18:44.782 { 00:18:44.782 "name": "key0", 00:18:44.782 "path": "/tmp/tmp.H8YwNrhVaH", 00:18:44.782 "method": "keyring_file_add_key", 00:18:44.782 "req_id": 1 00:18:44.782 } 00:18:44.782 Got JSON-RPC error response 00:18:44.782 response: 00:18:44.782 { 00:18:44.782 "code": -1, 00:18:44.782 "message": "Operation not permitted" 00:18:44.782 } 00:18:44.782 14:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:45.039 [2024-11-04 14:44:54.022462] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:45.040 [2024-11-04 14:44:54.022506] bdev_nvme.c:6620:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:45.040 request: 00:18:45.040 { 00:18:45.040 "name": "TLSTEST", 00:18:45.040 "trtype": "tcp", 00:18:45.040 "traddr": "10.0.0.3", 00:18:45.040 "adrfam": "ipv4", 00:18:45.040 "trsvcid": "4420", 00:18:45.040 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:45.040 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:45.040 "prchk_reftag": false, 00:18:45.040 "prchk_guard": false, 00:18:45.040 "hdgst": false, 00:18:45.040 "ddgst": false, 00:18:45.040 "psk": "key0", 00:18:45.040 "allow_unrecognized_csi": false, 00:18:45.040 "method": "bdev_nvme_attach_controller", 00:18:45.040 "req_id": 1 00:18:45.040 } 00:18:45.040 Got JSON-RPC error response 00:18:45.040 response: 00:18:45.040 { 00:18:45.040 "code": -126, 00:18:45.040 "message": "Required key not available" 00:18:45.040 } 00:18:45.040 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 70516 00:18:45.040 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 70516 ']' 00:18:45.040 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 70516 00:18:45.040 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:45.040 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:45.040 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70516 00:18:45.040 killing process with pid 70516 00:18:45.040 Received shutdown signal, test time was about 10.000000 seconds 00:18:45.040 00:18:45.040 Latency(us) 00:18:45.040 [2024-11-04T14:44:54.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.040 [2024-11-04T14:44:54.180Z] =================================================================================================================== 00:18:45.040 [2024-11-04T14:44:54.180Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:45.040 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:45.040 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:45.040 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70516' 00:18:45.040 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 70516 00:18:45.040 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 70516 00:18:45.040 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:45.040 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:45.040 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:45.040 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:45.040 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:45.040 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 70325 00:18:45.040 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 70325 ']' 00:18:45.040 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 70325 00:18:45.040 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:45.040 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:45.040 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70325 00:18:45.297 killing process with pid 70325 00:18:45.297 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:45.298 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:45.298 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70325' 00:18:45.298 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 70325 00:18:45.298 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 70325 00:18:45.298 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:45.298 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:45.298 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:45.298 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:18:45.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.298 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=70542 00:18:45.298 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 70542 00:18:45.298 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 70542 ']' 00:18:45.298 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.298 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:45.298 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:45.298 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.298 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:45.298 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.298 [2024-11-04 14:44:54.339536] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:18:45.298 [2024-11-04 14:44:54.339593] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.555 [2024-11-04 14:44:54.481302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.555 [2024-11-04 14:44:54.515622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.555 [2024-11-04 14:44:54.515664] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.555 [2024-11-04 14:44:54.515671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.555 [2024-11-04 14:44:54.515675] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.555 [2024-11-04 14:44:54.515680] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:45.555 [2024-11-04 14:44:54.515939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.555 [2024-11-04 14:44:54.546204] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:45.555 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:45.555 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:45.555 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:45.555 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:45.555 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.555 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.555 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.H8YwNrhVaH 00:18:45.555 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:45.555 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.H8YwNrhVaH 00:18:45.555 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:18:45.555 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:45.555 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:18:45.555 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:45.555 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.H8YwNrhVaH 00:18:45.555 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.H8YwNrhVaH 00:18:45.555 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:45.813 [2024-11-04 14:44:54.805581] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.813 14:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:46.070 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:18:46.328 [2024-11-04 14:44:55.213646] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:46.328 [2024-11-04 14:44:55.213808] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:46.328 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:46.328 malloc0 00:18:46.328 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:46.585 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.H8YwNrhVaH 00:18:46.843 
[2024-11-04 14:44:55.900117] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.H8YwNrhVaH': 0100666 00:18:46.843 [2024-11-04 14:44:55.900158] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:46.843 request: 00:18:46.843 { 00:18:46.843 "name": "key0", 00:18:46.843 "path": "/tmp/tmp.H8YwNrhVaH", 00:18:46.843 "method": "keyring_file_add_key", 00:18:46.843 "req_id": 1 00:18:46.843 } 00:18:46.843 Got JSON-RPC error response 00:18:46.843 response: 00:18:46.843 { 00:18:46.843 "code": -1, 00:18:46.843 "message": "Operation not permitted" 00:18:46.843 } 00:18:46.843 14:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:47.101 [2024-11-04 14:44:56.104157] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:47.101 [2024-11-04 14:44:56.104197] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:47.101 request: 00:18:47.101 { 00:18:47.101 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.101 "host": "nqn.2016-06.io.spdk:host1", 00:18:47.101 "psk": "key0", 00:18:47.101 "method": "nvmf_subsystem_add_host", 00:18:47.101 "req_id": 1 00:18:47.101 } 00:18:47.101 Got JSON-RPC error response 00:18:47.101 response: 00:18:47.101 { 00:18:47.101 "code": -32603, 00:18:47.101 "message": "Internal error" 00:18:47.101 } 00:18:47.101 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:47.101 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:47.101 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:47.101 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:47.101 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 70542 00:18:47.101 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 70542 ']' 00:18:47.101 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 70542 00:18:47.101 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:47.101 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:47.101 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70542 00:18:47.101 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:47.101 killing process with pid 70542 00:18:47.101 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:47.101 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70542' 00:18:47.101 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 70542 00:18:47.101 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 70542 00:18:47.358 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.H8YwNrhVaH 00:18:47.358 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:47.358 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:47.358 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:47.358 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.358 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=70598 00:18:47.358 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:47.358 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 70598 00:18:47.358 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 70598 ']' 00:18:47.358 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.358 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:47.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.358 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.358 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:47.358 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.358 [2024-11-04 14:44:56.292150] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:18:47.358 [2024-11-04 14:44:56.292202] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.358 [2024-11-04 14:44:56.426654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.358 [2024-11-04 14:44:56.460245] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.358 [2024-11-04 14:44:56.460289] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:47.358 [2024-11-04 14:44:56.460295] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:47.358 [2024-11-04 14:44:56.460301] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:47.358 [2024-11-04 14:44:56.460305] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
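The failed keyring_file_add_key above is the intentional negative half of this test case: the PSK file was left with mode 0666, the keyring refuses it ("Invalid permissions for key file ... 0100666"), and the RPC surfaces "Operation not permitted". target/tls.sh@182 then tightens the mode and restarts the target, after which the same key loads cleanly. A minimal sketch of that fix, using only commands already visible in this trace (the /tmp key path is the temporary file generated by this particular run, and rpc.py is the copy under the SPDK repo's scripts/ directory):

  chmod 0600 /tmp/tmp.H8YwNrhVaH
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.H8YwNrhVaH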
00:18:47.358 [2024-11-04 14:44:56.460564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:47.358 [2024-11-04 14:44:56.489948] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:48.291 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:48.291 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:48.291 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:48.291 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:48.291 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.291 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.291 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.H8YwNrhVaH 00:18:48.291 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.H8YwNrhVaH 00:18:48.291 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:48.291 [2024-11-04 14:44:57.364914] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.291 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:48.549 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:18:48.806 [2024-11-04 14:44:57.764966] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:48.806 [2024-11-04 14:44:57.765121] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:48.806 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:49.064 malloc0 00:18:49.064 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:49.064 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.H8YwNrhVaH 00:18:49.323 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:49.581 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=70649 00:18:49.581 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:49.581 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:49.581 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 70649 /var/tmp/bdevperf.sock 00:18:49.581 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 70649 ']' 
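For reference, the setup_nvmf_tgt pass that succeeds here against the restarted target (pid 70598) reduces to the following RPC sequence; the commands are taken directly from the trace, and the addresses, NQNs and key path are specific to this run:

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.H8YwNrhVaH
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The -k flag on nvmf_subsystem_add_listener is what requests TLS on the listener, which shows up as "secure_channel": true in the configuration saved later in the run.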
00:18:49.581 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:49.581 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:49.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:49.581 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:49.581 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:49.581 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.581 [2024-11-04 14:44:58.584121] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:18:49.581 [2024-11-04 14:44:58.584186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70649 ] 00:18:49.838 [2024-11-04 14:44:58.722784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.838 [2024-11-04 14:44:58.758064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.838 [2024-11-04 14:44:58.788023] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:50.403 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:50.403 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:50.403 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.H8YwNrhVaH 00:18:50.664 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:50.921 [2024-11-04 14:44:59.838324] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:50.921 TLSTESTn1 00:18:50.921 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:18:51.179 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:51.179 "subsystems": [ 00:18:51.179 { 00:18:51.179 "subsystem": "keyring", 00:18:51.179 "config": [ 00:18:51.179 { 00:18:51.179 "method": "keyring_file_add_key", 00:18:51.179 "params": { 00:18:51.179 "name": "key0", 00:18:51.179 "path": "/tmp/tmp.H8YwNrhVaH" 00:18:51.179 } 00:18:51.179 } 00:18:51.179 ] 00:18:51.179 }, 00:18:51.179 { 00:18:51.179 "subsystem": "iobuf", 00:18:51.179 "config": [ 00:18:51.179 { 00:18:51.179 "method": "iobuf_set_options", 00:18:51.179 "params": { 00:18:51.179 "small_pool_count": 8192, 00:18:51.179 "large_pool_count": 1024, 00:18:51.179 "small_bufsize": 8192, 00:18:51.179 "large_bufsize": 135168, 00:18:51.179 "enable_numa": false 00:18:51.179 } 00:18:51.179 } 00:18:51.179 ] 00:18:51.179 }, 00:18:51.179 { 00:18:51.179 "subsystem": "sock", 00:18:51.179 "config": [ 00:18:51.179 { 00:18:51.179 "method": "sock_set_default_impl", 00:18:51.179 "params": { 
00:18:51.179 "impl_name": "uring" 00:18:51.179 } 00:18:51.179 }, 00:18:51.179 { 00:18:51.179 "method": "sock_impl_set_options", 00:18:51.179 "params": { 00:18:51.179 "impl_name": "ssl", 00:18:51.179 "recv_buf_size": 4096, 00:18:51.179 "send_buf_size": 4096, 00:18:51.179 "enable_recv_pipe": true, 00:18:51.179 "enable_quickack": false, 00:18:51.179 "enable_placement_id": 0, 00:18:51.179 "enable_zerocopy_send_server": true, 00:18:51.179 "enable_zerocopy_send_client": false, 00:18:51.179 "zerocopy_threshold": 0, 00:18:51.179 "tls_version": 0, 00:18:51.179 "enable_ktls": false 00:18:51.179 } 00:18:51.179 }, 00:18:51.179 { 00:18:51.179 "method": "sock_impl_set_options", 00:18:51.179 "params": { 00:18:51.179 "impl_name": "posix", 00:18:51.179 "recv_buf_size": 2097152, 00:18:51.179 "send_buf_size": 2097152, 00:18:51.179 "enable_recv_pipe": true, 00:18:51.179 "enable_quickack": false, 00:18:51.179 "enable_placement_id": 0, 00:18:51.179 "enable_zerocopy_send_server": true, 00:18:51.179 "enable_zerocopy_send_client": false, 00:18:51.179 "zerocopy_threshold": 0, 00:18:51.179 "tls_version": 0, 00:18:51.179 "enable_ktls": false 00:18:51.179 } 00:18:51.179 }, 00:18:51.179 { 00:18:51.179 "method": "sock_impl_set_options", 00:18:51.179 "params": { 00:18:51.179 "impl_name": "uring", 00:18:51.179 "recv_buf_size": 2097152, 00:18:51.179 "send_buf_size": 2097152, 00:18:51.179 "enable_recv_pipe": true, 00:18:51.179 "enable_quickack": false, 00:18:51.179 "enable_placement_id": 0, 00:18:51.179 "enable_zerocopy_send_server": false, 00:18:51.179 "enable_zerocopy_send_client": false, 00:18:51.179 "zerocopy_threshold": 0, 00:18:51.179 "tls_version": 0, 00:18:51.179 "enable_ktls": false 00:18:51.179 } 00:18:51.179 } 00:18:51.179 ] 00:18:51.179 }, 00:18:51.179 { 00:18:51.179 "subsystem": "vmd", 00:18:51.179 "config": [] 00:18:51.179 }, 00:18:51.179 { 00:18:51.179 "subsystem": "accel", 00:18:51.179 "config": [ 00:18:51.179 { 00:18:51.179 "method": "accel_set_options", 00:18:51.179 "params": { 00:18:51.179 "small_cache_size": 128, 00:18:51.179 "large_cache_size": 16, 00:18:51.179 "task_count": 2048, 00:18:51.179 "sequence_count": 2048, 00:18:51.179 "buf_count": 2048 00:18:51.179 } 00:18:51.179 } 00:18:51.179 ] 00:18:51.179 }, 00:18:51.179 { 00:18:51.179 "subsystem": "bdev", 00:18:51.179 "config": [ 00:18:51.179 { 00:18:51.180 "method": "bdev_set_options", 00:18:51.180 "params": { 00:18:51.180 "bdev_io_pool_size": 65535, 00:18:51.180 "bdev_io_cache_size": 256, 00:18:51.180 "bdev_auto_examine": true, 00:18:51.180 "iobuf_small_cache_size": 128, 00:18:51.180 "iobuf_large_cache_size": 16 00:18:51.180 } 00:18:51.180 }, 00:18:51.180 { 00:18:51.180 "method": "bdev_raid_set_options", 00:18:51.180 "params": { 00:18:51.180 "process_window_size_kb": 1024, 00:18:51.180 "process_max_bandwidth_mb_sec": 0 00:18:51.180 } 00:18:51.180 }, 00:18:51.180 { 00:18:51.180 "method": "bdev_iscsi_set_options", 00:18:51.180 "params": { 00:18:51.180 "timeout_sec": 30 00:18:51.180 } 00:18:51.180 }, 00:18:51.180 { 00:18:51.180 "method": "bdev_nvme_set_options", 00:18:51.180 "params": { 00:18:51.180 "action_on_timeout": "none", 00:18:51.180 "timeout_us": 0, 00:18:51.180 "timeout_admin_us": 0, 00:18:51.180 "keep_alive_timeout_ms": 10000, 00:18:51.180 "arbitration_burst": 0, 00:18:51.180 "low_priority_weight": 0, 00:18:51.180 "medium_priority_weight": 0, 00:18:51.180 "high_priority_weight": 0, 00:18:51.180 "nvme_adminq_poll_period_us": 10000, 00:18:51.180 "nvme_ioq_poll_period_us": 0, 00:18:51.180 "io_queue_requests": 0, 00:18:51.180 "delay_cmd_submit": 
true, 00:18:51.180 "transport_retry_count": 4, 00:18:51.180 "bdev_retry_count": 3, 00:18:51.180 "transport_ack_timeout": 0, 00:18:51.180 "ctrlr_loss_timeout_sec": 0, 00:18:51.180 "reconnect_delay_sec": 0, 00:18:51.180 "fast_io_fail_timeout_sec": 0, 00:18:51.180 "disable_auto_failback": false, 00:18:51.180 "generate_uuids": false, 00:18:51.180 "transport_tos": 0, 00:18:51.180 "nvme_error_stat": false, 00:18:51.180 "rdma_srq_size": 0, 00:18:51.180 "io_path_stat": false, 00:18:51.180 "allow_accel_sequence": false, 00:18:51.180 "rdma_max_cq_size": 0, 00:18:51.180 "rdma_cm_event_timeout_ms": 0, 00:18:51.180 "dhchap_digests": [ 00:18:51.180 "sha256", 00:18:51.180 "sha384", 00:18:51.180 "sha512" 00:18:51.180 ], 00:18:51.180 "dhchap_dhgroups": [ 00:18:51.180 "null", 00:18:51.180 "ffdhe2048", 00:18:51.180 "ffdhe3072", 00:18:51.180 "ffdhe4096", 00:18:51.180 "ffdhe6144", 00:18:51.180 "ffdhe8192" 00:18:51.180 ] 00:18:51.180 } 00:18:51.180 }, 00:18:51.180 { 00:18:51.180 "method": "bdev_nvme_set_hotplug", 00:18:51.180 "params": { 00:18:51.180 "period_us": 100000, 00:18:51.180 "enable": false 00:18:51.180 } 00:18:51.180 }, 00:18:51.180 { 00:18:51.180 "method": "bdev_malloc_create", 00:18:51.180 "params": { 00:18:51.180 "name": "malloc0", 00:18:51.180 "num_blocks": 8192, 00:18:51.180 "block_size": 4096, 00:18:51.180 "physical_block_size": 4096, 00:18:51.180 "uuid": "d73a3cb4-84f3-40f1-a603-2176694a2ec0", 00:18:51.180 "optimal_io_boundary": 0, 00:18:51.180 "md_size": 0, 00:18:51.180 "dif_type": 0, 00:18:51.180 "dif_is_head_of_md": false, 00:18:51.180 "dif_pi_format": 0 00:18:51.180 } 00:18:51.180 }, 00:18:51.180 { 00:18:51.180 "method": "bdev_wait_for_examine" 00:18:51.180 } 00:18:51.180 ] 00:18:51.180 }, 00:18:51.180 { 00:18:51.180 "subsystem": "nbd", 00:18:51.180 "config": [] 00:18:51.180 }, 00:18:51.180 { 00:18:51.180 "subsystem": "scheduler", 00:18:51.180 "config": [ 00:18:51.180 { 00:18:51.180 "method": "framework_set_scheduler", 00:18:51.180 "params": { 00:18:51.180 "name": "static" 00:18:51.180 } 00:18:51.180 } 00:18:51.180 ] 00:18:51.180 }, 00:18:51.180 { 00:18:51.180 "subsystem": "nvmf", 00:18:51.180 "config": [ 00:18:51.180 { 00:18:51.180 "method": "nvmf_set_config", 00:18:51.180 "params": { 00:18:51.180 "discovery_filter": "match_any", 00:18:51.180 "admin_cmd_passthru": { 00:18:51.180 "identify_ctrlr": false 00:18:51.180 }, 00:18:51.180 "dhchap_digests": [ 00:18:51.180 "sha256", 00:18:51.180 "sha384", 00:18:51.180 "sha512" 00:18:51.180 ], 00:18:51.180 "dhchap_dhgroups": [ 00:18:51.180 "null", 00:18:51.180 "ffdhe2048", 00:18:51.180 "ffdhe3072", 00:18:51.180 "ffdhe4096", 00:18:51.180 "ffdhe6144", 00:18:51.180 "ffdhe8192" 00:18:51.180 ] 00:18:51.180 } 00:18:51.180 }, 00:18:51.180 { 00:18:51.180 "method": "nvmf_set_max_subsystems", 00:18:51.180 "params": { 00:18:51.180 "max_subsystems": 1024 00:18:51.180 } 00:18:51.180 }, 00:18:51.180 { 00:18:51.180 "method": "nvmf_set_crdt", 00:18:51.180 "params": { 00:18:51.180 "crdt1": 0, 00:18:51.180 "crdt2": 0, 00:18:51.180 "crdt3": 0 00:18:51.180 } 00:18:51.180 }, 00:18:51.180 { 00:18:51.180 "method": "nvmf_create_transport", 00:18:51.180 "params": { 00:18:51.180 "trtype": "TCP", 00:18:51.180 "max_queue_depth": 128, 00:18:51.180 "max_io_qpairs_per_ctrlr": 127, 00:18:51.180 "in_capsule_data_size": 4096, 00:18:51.180 "max_io_size": 131072, 00:18:51.180 "io_unit_size": 131072, 00:18:51.180 "max_aq_depth": 128, 00:18:51.180 "num_shared_buffers": 511, 00:18:51.180 "buf_cache_size": 4294967295, 00:18:51.180 "dif_insert_or_strip": false, 00:18:51.180 "zcopy": false, 
00:18:51.180 "c2h_success": false, 00:18:51.180 "sock_priority": 0, 00:18:51.180 "abort_timeout_sec": 1, 00:18:51.180 "ack_timeout": 0, 00:18:51.180 "data_wr_pool_size": 0 00:18:51.180 } 00:18:51.180 }, 00:18:51.180 { 00:18:51.180 "method": "nvmf_create_subsystem", 00:18:51.180 "params": { 00:18:51.180 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.180 "allow_any_host": false, 00:18:51.180 "serial_number": "SPDK00000000000001", 00:18:51.180 "model_number": "SPDK bdev Controller", 00:18:51.180 "max_namespaces": 10, 00:18:51.180 "min_cntlid": 1, 00:18:51.180 "max_cntlid": 65519, 00:18:51.180 "ana_reporting": false 00:18:51.180 } 00:18:51.180 }, 00:18:51.180 { 00:18:51.180 "method": "nvmf_subsystem_add_host", 00:18:51.180 "params": { 00:18:51.180 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.180 "host": "nqn.2016-06.io.spdk:host1", 00:18:51.180 "psk": "key0" 00:18:51.180 } 00:18:51.180 }, 00:18:51.180 { 00:18:51.180 "method": "nvmf_subsystem_add_ns", 00:18:51.180 "params": { 00:18:51.180 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.180 "namespace": { 00:18:51.180 "nsid": 1, 00:18:51.180 "bdev_name": "malloc0", 00:18:51.180 "nguid": "D73A3CB484F340F1A6032176694A2EC0", 00:18:51.180 "uuid": "d73a3cb4-84f3-40f1-a603-2176694a2ec0", 00:18:51.180 "no_auto_visible": false 00:18:51.180 } 00:18:51.180 } 00:18:51.180 }, 00:18:51.180 { 00:18:51.180 "method": "nvmf_subsystem_add_listener", 00:18:51.180 "params": { 00:18:51.180 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.180 "listen_address": { 00:18:51.180 "trtype": "TCP", 00:18:51.180 "adrfam": "IPv4", 00:18:51.180 "traddr": "10.0.0.3", 00:18:51.180 "trsvcid": "4420" 00:18:51.180 }, 00:18:51.180 "secure_channel": true 00:18:51.180 } 00:18:51.180 } 00:18:51.180 ] 00:18:51.180 } 00:18:51.180 ] 00:18:51.180 }' 00:18:51.180 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:51.439 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:51.439 "subsystems": [ 00:18:51.439 { 00:18:51.439 "subsystem": "keyring", 00:18:51.439 "config": [ 00:18:51.439 { 00:18:51.439 "method": "keyring_file_add_key", 00:18:51.439 "params": { 00:18:51.439 "name": "key0", 00:18:51.439 "path": "/tmp/tmp.H8YwNrhVaH" 00:18:51.439 } 00:18:51.439 } 00:18:51.439 ] 00:18:51.439 }, 00:18:51.439 { 00:18:51.439 "subsystem": "iobuf", 00:18:51.439 "config": [ 00:18:51.439 { 00:18:51.439 "method": "iobuf_set_options", 00:18:51.439 "params": { 00:18:51.439 "small_pool_count": 8192, 00:18:51.439 "large_pool_count": 1024, 00:18:51.439 "small_bufsize": 8192, 00:18:51.439 "large_bufsize": 135168, 00:18:51.439 "enable_numa": false 00:18:51.439 } 00:18:51.439 } 00:18:51.439 ] 00:18:51.439 }, 00:18:51.439 { 00:18:51.439 "subsystem": "sock", 00:18:51.439 "config": [ 00:18:51.439 { 00:18:51.439 "method": "sock_set_default_impl", 00:18:51.439 "params": { 00:18:51.439 "impl_name": "uring" 00:18:51.439 } 00:18:51.439 }, 00:18:51.439 { 00:18:51.439 "method": "sock_impl_set_options", 00:18:51.439 "params": { 00:18:51.439 "impl_name": "ssl", 00:18:51.439 "recv_buf_size": 4096, 00:18:51.439 "send_buf_size": 4096, 00:18:51.439 "enable_recv_pipe": true, 00:18:51.439 "enable_quickack": false, 00:18:51.439 "enable_placement_id": 0, 00:18:51.439 "enable_zerocopy_send_server": true, 00:18:51.439 "enable_zerocopy_send_client": false, 00:18:51.439 "zerocopy_threshold": 0, 00:18:51.439 "tls_version": 0, 00:18:51.439 "enable_ktls": false 00:18:51.439 } 00:18:51.439 }, 
00:18:51.439 { 00:18:51.439 "method": "sock_impl_set_options", 00:18:51.439 "params": { 00:18:51.439 "impl_name": "posix", 00:18:51.439 "recv_buf_size": 2097152, 00:18:51.439 "send_buf_size": 2097152, 00:18:51.439 "enable_recv_pipe": true, 00:18:51.439 "enable_quickack": false, 00:18:51.439 "enable_placement_id": 0, 00:18:51.439 "enable_zerocopy_send_server": true, 00:18:51.439 "enable_zerocopy_send_client": false, 00:18:51.439 "zerocopy_threshold": 0, 00:18:51.439 "tls_version": 0, 00:18:51.439 "enable_ktls": false 00:18:51.439 } 00:18:51.439 }, 00:18:51.439 { 00:18:51.439 "method": "sock_impl_set_options", 00:18:51.439 "params": { 00:18:51.439 "impl_name": "uring", 00:18:51.439 "recv_buf_size": 2097152, 00:18:51.439 "send_buf_size": 2097152, 00:18:51.439 "enable_recv_pipe": true, 00:18:51.439 "enable_quickack": false, 00:18:51.439 "enable_placement_id": 0, 00:18:51.439 "enable_zerocopy_send_server": false, 00:18:51.439 "enable_zerocopy_send_client": false, 00:18:51.439 "zerocopy_threshold": 0, 00:18:51.439 "tls_version": 0, 00:18:51.439 "enable_ktls": false 00:18:51.439 } 00:18:51.439 } 00:18:51.439 ] 00:18:51.439 }, 00:18:51.439 { 00:18:51.439 "subsystem": "vmd", 00:18:51.439 "config": [] 00:18:51.439 }, 00:18:51.439 { 00:18:51.439 "subsystem": "accel", 00:18:51.439 "config": [ 00:18:51.439 { 00:18:51.439 "method": "accel_set_options", 00:18:51.439 "params": { 00:18:51.439 "small_cache_size": 128, 00:18:51.439 "large_cache_size": 16, 00:18:51.439 "task_count": 2048, 00:18:51.439 "sequence_count": 2048, 00:18:51.439 "buf_count": 2048 00:18:51.439 } 00:18:51.439 } 00:18:51.439 ] 00:18:51.439 }, 00:18:51.439 { 00:18:51.439 "subsystem": "bdev", 00:18:51.439 "config": [ 00:18:51.439 { 00:18:51.439 "method": "bdev_set_options", 00:18:51.439 "params": { 00:18:51.439 "bdev_io_pool_size": 65535, 00:18:51.439 "bdev_io_cache_size": 256, 00:18:51.439 "bdev_auto_examine": true, 00:18:51.439 "iobuf_small_cache_size": 128, 00:18:51.439 "iobuf_large_cache_size": 16 00:18:51.439 } 00:18:51.439 }, 00:18:51.439 { 00:18:51.439 "method": "bdev_raid_set_options", 00:18:51.439 "params": { 00:18:51.439 "process_window_size_kb": 1024, 00:18:51.439 "process_max_bandwidth_mb_sec": 0 00:18:51.439 } 00:18:51.439 }, 00:18:51.439 { 00:18:51.439 "method": "bdev_iscsi_set_options", 00:18:51.439 "params": { 00:18:51.439 "timeout_sec": 30 00:18:51.439 } 00:18:51.439 }, 00:18:51.439 { 00:18:51.439 "method": "bdev_nvme_set_options", 00:18:51.439 "params": { 00:18:51.439 "action_on_timeout": "none", 00:18:51.439 "timeout_us": 0, 00:18:51.439 "timeout_admin_us": 0, 00:18:51.439 "keep_alive_timeout_ms": 10000, 00:18:51.439 "arbitration_burst": 0, 00:18:51.439 "low_priority_weight": 0, 00:18:51.439 "medium_priority_weight": 0, 00:18:51.439 "high_priority_weight": 0, 00:18:51.439 "nvme_adminq_poll_period_us": 10000, 00:18:51.439 "nvme_ioq_poll_period_us": 0, 00:18:51.439 "io_queue_requests": 512, 00:18:51.439 "delay_cmd_submit": true, 00:18:51.439 "transport_retry_count": 4, 00:18:51.439 "bdev_retry_count": 3, 00:18:51.439 "transport_ack_timeout": 0, 00:18:51.439 "ctrlr_loss_timeout_sec": 0, 00:18:51.439 "reconnect_delay_sec": 0, 00:18:51.439 "fast_io_fail_timeout_sec": 0, 00:18:51.439 "disable_auto_failback": false, 00:18:51.439 "generate_uuids": false, 00:18:51.439 "transport_tos": 0, 00:18:51.439 "nvme_error_stat": false, 00:18:51.439 "rdma_srq_size": 0, 00:18:51.439 "io_path_stat": false, 00:18:51.439 "allow_accel_sequence": false, 00:18:51.439 "rdma_max_cq_size": 0, 00:18:51.439 "rdma_cm_event_timeout_ms": 0, 00:18:51.439 
"dhchap_digests": [ 00:18:51.439 "sha256", 00:18:51.439 "sha384", 00:18:51.439 "sha512" 00:18:51.439 ], 00:18:51.439 "dhchap_dhgroups": [ 00:18:51.439 "null", 00:18:51.439 "ffdhe2048", 00:18:51.439 "ffdhe3072", 00:18:51.439 "ffdhe4096", 00:18:51.439 "ffdhe6144", 00:18:51.439 "ffdhe8192" 00:18:51.440 ] 00:18:51.440 } 00:18:51.440 }, 00:18:51.440 { 00:18:51.440 "method": "bdev_nvme_attach_controller", 00:18:51.440 "params": { 00:18:51.440 "name": "TLSTEST", 00:18:51.440 "trtype": "TCP", 00:18:51.440 "adrfam": "IPv4", 00:18:51.440 "traddr": "10.0.0.3", 00:18:51.440 "trsvcid": "4420", 00:18:51.440 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.440 "prchk_reftag": false, 00:18:51.440 "prchk_guard": false, 00:18:51.440 "ctrlr_loss_timeout_sec": 0, 00:18:51.440 "reconnect_delay_sec": 0, 00:18:51.440 "fast_io_fail_timeout_sec": 0, 00:18:51.440 "psk": "key0", 00:18:51.440 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:51.440 "hdgst": false, 00:18:51.440 "ddgst": false, 00:18:51.440 "multipath": "multipath" 00:18:51.440 } 00:18:51.440 }, 00:18:51.440 { 00:18:51.440 "method": "bdev_nvme_set_hotplug", 00:18:51.440 "params": { 00:18:51.440 "period_us": 100000, 00:18:51.440 "enable": false 00:18:51.440 } 00:18:51.440 }, 00:18:51.440 { 00:18:51.440 "method": "bdev_wait_for_examine" 00:18:51.440 } 00:18:51.440 ] 00:18:51.440 }, 00:18:51.440 { 00:18:51.440 "subsystem": "nbd", 00:18:51.440 "config": [] 00:18:51.440 } 00:18:51.440 ] 00:18:51.440 }' 00:18:51.440 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 70649 00:18:51.440 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 70649 ']' 00:18:51.440 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 70649 00:18:51.440 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:51.440 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:51.440 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70649 00:18:51.440 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:51.440 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:51.440 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70649' 00:18:51.440 killing process with pid 70649 00:18:51.440 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 70649 00:18:51.440 Received shutdown signal, test time was about 10.000000 seconds 00:18:51.440 00:18:51.440 Latency(us) 00:18:51.440 [2024-11-04T14:45:00.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.440 [2024-11-04T14:45:00.580Z] =================================================================================================================== 00:18:51.440 [2024-11-04T14:45:00.580Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:51.440 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 70649 00:18:51.698 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 70598 00:18:51.698 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 70598 ']' 00:18:51.698 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
kill -0 70598 00:18:51.698 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:51.698 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:51.698 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70598 00:18:51.698 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:51.698 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:51.698 killing process with pid 70598 00:18:51.698 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70598' 00:18:51.698 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 70598 00:18:51.698 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 70598 00:18:51.698 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:51.698 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:51.698 "subsystems": [ 00:18:51.698 { 00:18:51.698 "subsystem": "keyring", 00:18:51.698 "config": [ 00:18:51.698 { 00:18:51.698 "method": "keyring_file_add_key", 00:18:51.698 "params": { 00:18:51.698 "name": "key0", 00:18:51.698 "path": "/tmp/tmp.H8YwNrhVaH" 00:18:51.698 } 00:18:51.698 } 00:18:51.698 ] 00:18:51.698 }, 00:18:51.698 { 00:18:51.698 "subsystem": "iobuf", 00:18:51.698 "config": [ 00:18:51.698 { 00:18:51.698 "method": "iobuf_set_options", 00:18:51.698 "params": { 00:18:51.698 "small_pool_count": 8192, 00:18:51.698 "large_pool_count": 1024, 00:18:51.698 "small_bufsize": 8192, 00:18:51.698 "large_bufsize": 135168, 00:18:51.698 "enable_numa": false 00:18:51.698 } 00:18:51.698 } 00:18:51.698 ] 00:18:51.698 }, 00:18:51.698 { 00:18:51.698 "subsystem": "sock", 00:18:51.698 "config": [ 00:18:51.698 { 00:18:51.698 "method": "sock_set_default_impl", 00:18:51.698 "params": { 00:18:51.698 "impl_name": "uring" 00:18:51.698 } 00:18:51.698 }, 00:18:51.698 { 00:18:51.698 "method": "sock_impl_set_options", 00:18:51.698 "params": { 00:18:51.698 "impl_name": "ssl", 00:18:51.698 "recv_buf_size": 4096, 00:18:51.698 "send_buf_size": 4096, 00:18:51.698 "enable_recv_pipe": true, 00:18:51.698 "enable_quickack": false, 00:18:51.698 "enable_placement_id": 0, 00:18:51.698 "enable_zerocopy_send_server": true, 00:18:51.698 "enable_zerocopy_send_client": false, 00:18:51.698 "zerocopy_threshold": 0, 00:18:51.698 "tls_version": 0, 00:18:51.698 "enable_ktls": false 00:18:51.698 } 00:18:51.698 }, 00:18:51.698 { 00:18:51.698 "method": "sock_impl_set_options", 00:18:51.698 "params": { 00:18:51.698 "impl_name": "posix", 00:18:51.698 "recv_buf_size": 2097152, 00:18:51.698 "send_buf_size": 2097152, 00:18:51.698 "enable_recv_pipe": true, 00:18:51.698 "enable_quickack": false, 00:18:51.698 "enable_placement_id": 0, 00:18:51.698 "enable_zerocopy_send_server": true, 00:18:51.698 "enable_zerocopy_send_client": false, 00:18:51.698 "zerocopy_threshold": 0, 00:18:51.698 "tls_version": 0, 00:18:51.698 "enable_ktls": false 00:18:51.698 } 00:18:51.698 }, 00:18:51.698 { 00:18:51.698 "method": "sock_impl_set_options", 00:18:51.698 "params": { 00:18:51.698 "impl_name": "uring", 00:18:51.698 "recv_buf_size": 2097152, 00:18:51.698 "send_buf_size": 2097152, 00:18:51.698 "enable_recv_pipe": true, 00:18:51.698 "enable_quickack": false, 00:18:51.698 
"enable_placement_id": 0, 00:18:51.698 "enable_zerocopy_send_server": false, 00:18:51.698 "enable_zerocopy_send_client": false, 00:18:51.698 "zerocopy_threshold": 0, 00:18:51.698 "tls_version": 0, 00:18:51.698 "enable_ktls": false 00:18:51.698 } 00:18:51.698 } 00:18:51.698 ] 00:18:51.698 }, 00:18:51.698 { 00:18:51.698 "subsystem": "vmd", 00:18:51.698 "config": [] 00:18:51.698 }, 00:18:51.698 { 00:18:51.698 "subsystem": "accel", 00:18:51.698 "config": [ 00:18:51.698 { 00:18:51.698 "method": "accel_set_options", 00:18:51.698 "params": { 00:18:51.698 "small_cache_size": 128, 00:18:51.698 "large_cache_size": 16, 00:18:51.698 "task_count": 2048, 00:18:51.698 "sequence_count": 2048, 00:18:51.698 "buf_count": 2048 00:18:51.698 } 00:18:51.698 } 00:18:51.698 ] 00:18:51.698 }, 00:18:51.698 { 00:18:51.698 "subsystem": "bdev", 00:18:51.698 "config": [ 00:18:51.698 { 00:18:51.698 "method": "bdev_set_options", 00:18:51.698 "params": { 00:18:51.698 "bdev_io_pool_size": 65535, 00:18:51.698 "bdev_io_cache_size": 256, 00:18:51.698 "bdev_auto_examine": true, 00:18:51.698 "iobuf_small_cache_size": 128, 00:18:51.698 "iobuf_large_cache_size": 16 00:18:51.698 } 00:18:51.698 }, 00:18:51.698 { 00:18:51.698 "method": "bdev_raid_set_options", 00:18:51.698 "params": { 00:18:51.698 "process_window_size_kb": 1024, 00:18:51.698 "process_max_bandwidth_mb_sec": 0 00:18:51.698 } 00:18:51.698 }, 00:18:51.698 { 00:18:51.698 "method": "bdev_iscsi_set_options", 00:18:51.698 "params": { 00:18:51.698 "timeout_sec": 30 00:18:51.698 } 00:18:51.698 }, 00:18:51.698 { 00:18:51.698 "method": "bdev_nvme_set_options", 00:18:51.698 "params": { 00:18:51.698 "action_on_timeout": "none", 00:18:51.698 "timeout_us": 0, 00:18:51.698 "timeout_admin_us": 0, 00:18:51.698 "keep_alive_timeout_ms": 10000, 00:18:51.698 "arbitration_burst": 0, 00:18:51.698 "low_priority_weight": 0, 00:18:51.698 "medium_priority_weight": 0, 00:18:51.698 "high_priority_weight": 0, 00:18:51.698 "nvme_adminq_poll_period_us": 10000, 00:18:51.698 "nvme_ioq_poll_period_us": 0, 00:18:51.698 "io_queue_requests": 0, 00:18:51.698 "delay_cmd_submit": true, 00:18:51.698 "transport_retry_count": 4, 00:18:51.698 "bdev_retry_count": 3, 00:18:51.698 "transport_ack_timeout": 0, 00:18:51.698 "ctrlr_loss_timeout_sec": 0, 00:18:51.698 "reconnect_delay_sec": 0, 00:18:51.698 "fast_io_fail_timeout_sec": 0, 00:18:51.698 "disable_auto_failback": false, 00:18:51.698 "generate_uuids": false, 00:18:51.698 "transport_tos": 0, 00:18:51.698 "nvme_error_stat": false, 00:18:51.698 "rdma_srq_size": 0, 00:18:51.698 "io_path_stat": false, 00:18:51.698 "allow_accel_sequence": false, 00:18:51.698 "rdma_max_cq_size": 0, 00:18:51.698 "rdma_cm_event_timeout_ms": 0, 00:18:51.698 "dhchap_digests": [ 00:18:51.698 "sha256", 00:18:51.698 "sha384", 00:18:51.698 "sha512" 00:18:51.698 ], 00:18:51.698 "dhchap_dhgroups": [ 00:18:51.698 "null", 00:18:51.698 "ffdhe2048", 00:18:51.698 "ffdhe3072", 00:18:51.698 "ffdhe4096", 00:18:51.698 "ffdhe6144", 00:18:51.698 "ffdhe8192" 00:18:51.698 ] 00:18:51.698 } 00:18:51.698 }, 00:18:51.698 { 00:18:51.698 "method": "bdev_nvme_set_hotplug", 00:18:51.698 "params": { 00:18:51.698 "period_us": 100000, 00:18:51.698 "enable": false 00:18:51.698 } 00:18:51.698 }, 00:18:51.698 { 00:18:51.698 "method": "bdev_malloc_create", 00:18:51.698 "params": { 00:18:51.698 "name": "malloc0", 00:18:51.698 "num_blocks": 8192, 00:18:51.698 "block_size": 4096, 00:18:51.698 "physical_block_size": 4096, 00:18:51.698 "uuid": "d73a3cb4-84f3-40f1-a603-2176694a2ec0", 00:18:51.698 "optimal_io_boundary": 0, 
00:18:51.698 "md_size": 0, 00:18:51.698 "dif_type": 0, 00:18:51.698 "dif_is_head_of_md": false, 00:18:51.698 "dif_pi_format": 0 00:18:51.698 } 00:18:51.698 }, 00:18:51.698 { 00:18:51.698 "method": "bdev_wait_for_examine" 00:18:51.698 } 00:18:51.698 ] 00:18:51.698 }, 00:18:51.698 { 00:18:51.698 "subsystem": "nbd", 00:18:51.698 "config": [] 00:18:51.698 }, 00:18:51.698 { 00:18:51.699 "subsystem": "scheduler", 00:18:51.699 "config": [ 00:18:51.699 { 00:18:51.699 "method": "framework_set_scheduler", 00:18:51.699 "params": { 00:18:51.699 "name": "static" 00:18:51.699 } 00:18:51.699 } 00:18:51.699 ] 00:18:51.699 }, 00:18:51.699 { 00:18:51.699 "subsystem": "nvmf", 00:18:51.699 "config": [ 00:18:51.699 { 00:18:51.699 "method": "nvmf_set_config", 00:18:51.699 "params": { 00:18:51.699 "discovery_filter": "match_any", 00:18:51.699 "admin_cmd_passthru": { 00:18:51.699 "identify_ctrlr": false 00:18:51.699 }, 00:18:51.699 "dhchap_digests": [ 00:18:51.699 "sha256", 00:18:51.699 "sha384", 00:18:51.699 "sha512" 00:18:51.699 ], 00:18:51.699 "dhchap_dhgroups": [ 00:18:51.699 "null", 00:18:51.699 "ffdhe2048", 00:18:51.699 "ffdhe3072", 00:18:51.699 "ffdhe4096", 00:18:51.699 "ffdhe6144", 00:18:51.699 "ffdhe8192" 00:18:51.699 ] 00:18:51.699 } 00:18:51.699 }, 00:18:51.699 { 00:18:51.699 "method": "nvmf_set_max_subsystems", 00:18:51.699 "params": { 00:18:51.699 "max_subsystems": 1024 00:18:51.699 } 00:18:51.699 }, 00:18:51.699 { 00:18:51.699 "method": "nvmf_set_crdt", 00:18:51.699 "params": { 00:18:51.699 "crdt1": 0, 00:18:51.699 "crdt2": 0, 00:18:51.699 "crdt3": 0 00:18:51.699 } 00:18:51.699 }, 00:18:51.699 { 00:18:51.699 "method": "nvmf_create_transport", 00:18:51.699 "params": { 00:18:51.699 "trtype": "TCP", 00:18:51.699 "max_queue_depth": 128, 00:18:51.699 "max_io_qpairs_per_ctrlr": 127, 00:18:51.699 "in_capsule_data_size": 4096, 00:18:51.699 "max_io_size": 131072, 00:18:51.699 "io_unit_size": 131072, 00:18:51.699 "max_aq_depth": 128, 00:18:51.699 "num_shared_buffers": 511, 00:18:51.699 "buf_cache_size": 4294967295, 00:18:51.699 "dif_insert_or_strip": false, 00:18:51.699 "zcopy": false, 00:18:51.699 "c2h_success": false, 00:18:51.699 "sock_priority": 0, 00:18:51.699 "abort_timeout_sec": 1, 00:18:51.699 "ack_timeout": 0, 00:18:51.699 "data_wr_pool_size": 0 00:18:51.699 } 00:18:51.699 }, 00:18:51.699 { 00:18:51.699 "method": "nvmf_create_subsystem", 00:18:51.699 "params": { 00:18:51.699 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.699 "allow_any_host": false, 00:18:51.699 "serial_number": "SPDK00000000000001", 00:18:51.699 "model_number": "SPDK bdev Controller", 00:18:51.699 "max_namespaces": 10, 00:18:51.699 "min_cntlid": 1, 00:18:51.699 "max_cntlid": 65519, 00:18:51.699 "ana_reporting": false 00:18:51.699 } 00:18:51.699 }, 00:18:51.699 { 00:18:51.699 "method": "nvmf_subsystem_add_host", 00:18:51.699 "params": { 00:18:51.699 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.699 "host": "nqn.2016-06.io.spdk:host1", 00:18:51.699 "psk": "key0" 00:18:51.699 } 00:18:51.699 }, 00:18:51.699 { 00:18:51.699 "method": "nvmf_subsystem_add_ns", 00:18:51.699 "params": { 00:18:51.699 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.699 "namespace": { 00:18:51.699 "nsid": 1, 00:18:51.699 "bdev_name": "malloc0", 00:18:51.699 "nguid": "D73A3CB484F340F1A6032176694A2EC0", 00:18:51.699 "uuid": "d73a3cb4-84f3-40f1-a603-2176694a2ec0", 00:18:51.699 "no_auto_visible": false 00:18:51.699 } 00:18:51.699 } 00:18:51.699 }, 00:18:51.699 { 00:18:51.699 "method": "nvmf_subsystem_add_listener", 00:18:51.699 "params": { 00:18:51.699 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:18:51.699 "listen_address": { 00:18:51.699 "trtype": "TCP", 00:18:51.699 "adrfam": "IPv4", 00:18:51.699 "traddr": "10.0.0.3", 00:18:51.699 "trsvcid": "4420" 00:18:51.699 }, 00:18:51.699 "secure_channel": true 00:18:51.699 } 00:18:51.699 } 00:18:51.699 ] 00:18:51.699 } 00:18:51.699 ] 00:18:51.699 }' 00:18:51.699 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:51.699 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:51.699 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.699 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=70693 00:18:51.699 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:51.699 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 70693 00:18:51.699 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 70693 ']' 00:18:51.699 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.699 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:51.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.699 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.699 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:51.699 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.699 [2024-11-04 14:45:00.788362] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:18:51.699 [2024-11-04 14:45:00.788446] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:51.957 [2024-11-04 14:45:00.933765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.957 [2024-11-04 14:45:00.971576] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:51.957 [2024-11-04 14:45:00.971655] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:51.957 [2024-11-04 14:45:00.971669] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:51.957 [2024-11-04 14:45:00.971678] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:51.957 [2024-11-04 14:45:00.971686] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:51.957 [2024-11-04 14:45:00.972094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.220 [2024-11-04 14:45:01.116707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:52.220 [2024-11-04 14:45:01.178630] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:52.220 [2024-11-04 14:45:01.210530] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:52.220 [2024-11-04 14:45:01.210690] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:52.492 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:52.492 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:52.492 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:52.492 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:52.492 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.492 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:52.492 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=70725 00:18:52.492 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 70725 /var/tmp/bdevperf.sock 00:18:52.492 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 70725 ']' 00:18:52.492 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:52.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:52.492 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:52.492 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:52.492 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:52.492 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.492 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:52.492 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:52.492 "subsystems": [ 00:18:52.492 { 00:18:52.492 "subsystem": "keyring", 00:18:52.492 "config": [ 00:18:52.492 { 00:18:52.492 "method": "keyring_file_add_key", 00:18:52.492 "params": { 00:18:52.492 "name": "key0", 00:18:52.492 "path": "/tmp/tmp.H8YwNrhVaH" 00:18:52.492 } 00:18:52.492 } 00:18:52.492 ] 00:18:52.492 }, 00:18:52.492 { 00:18:52.492 "subsystem": "iobuf", 00:18:52.492 "config": [ 00:18:52.492 { 00:18:52.492 "method": "iobuf_set_options", 00:18:52.492 "params": { 00:18:52.492 "small_pool_count": 8192, 00:18:52.492 "large_pool_count": 1024, 00:18:52.492 "small_bufsize": 8192, 00:18:52.492 "large_bufsize": 135168, 00:18:52.492 "enable_numa": false 00:18:52.492 } 00:18:52.492 } 00:18:52.492 ] 00:18:52.492 }, 00:18:52.492 { 00:18:52.492 "subsystem": "sock", 00:18:52.492 "config": [ 00:18:52.492 { 00:18:52.492 "method": "sock_set_default_impl", 00:18:52.492 "params": { 00:18:52.492 "impl_name": "uring" 00:18:52.492 } 00:18:52.492 }, 00:18:52.492 { 00:18:52.492 "method": "sock_impl_set_options", 00:18:52.492 "params": { 00:18:52.492 "impl_name": "ssl", 00:18:52.492 "recv_buf_size": 4096, 00:18:52.492 "send_buf_size": 4096, 00:18:52.493 "enable_recv_pipe": true, 00:18:52.493 "enable_quickack": false, 00:18:52.493 "enable_placement_id": 0, 00:18:52.493 "enable_zerocopy_send_server": true, 00:18:52.493 "enable_zerocopy_send_client": false, 00:18:52.493 "zerocopy_threshold": 0, 00:18:52.493 "tls_version": 0, 00:18:52.493 "enable_ktls": false 00:18:52.493 } 00:18:52.493 }, 00:18:52.493 { 00:18:52.493 "method": "sock_impl_set_options", 00:18:52.493 "params": { 00:18:52.493 "impl_name": "posix", 00:18:52.493 "recv_buf_size": 2097152, 00:18:52.493 "send_buf_size": 2097152, 00:18:52.493 "enable_recv_pipe": true, 00:18:52.493 "enable_quickack": false, 00:18:52.493 "enable_placement_id": 0, 00:18:52.493 "enable_zerocopy_send_server": true, 00:18:52.493 "enable_zerocopy_send_client": false, 00:18:52.493 "zerocopy_threshold": 0, 00:18:52.493 "tls_version": 0, 00:18:52.493 "enable_ktls": false 00:18:52.493 } 00:18:52.493 }, 00:18:52.493 { 00:18:52.493 "method": "sock_impl_set_options", 00:18:52.493 "params": { 00:18:52.493 "impl_name": "uring", 00:18:52.493 "recv_buf_size": 2097152, 00:18:52.493 "send_buf_size": 2097152, 00:18:52.493 "enable_recv_pipe": true, 00:18:52.493 "enable_quickack": false, 00:18:52.493 "enable_placement_id": 0, 00:18:52.493 "enable_zerocopy_send_server": false, 00:18:52.493 "enable_zerocopy_send_client": false, 00:18:52.493 "zerocopy_threshold": 0, 00:18:52.493 "tls_version": 0, 00:18:52.493 "enable_ktls": false 00:18:52.493 } 00:18:52.493 } 00:18:52.493 ] 00:18:52.493 }, 00:18:52.493 { 00:18:52.493 "subsystem": "vmd", 00:18:52.493 "config": [] 00:18:52.493 }, 00:18:52.493 { 00:18:52.493 "subsystem": "accel", 00:18:52.493 "config": [ 00:18:52.493 { 00:18:52.493 "method": "accel_set_options", 00:18:52.493 "params": { 00:18:52.493 "small_cache_size": 128, 00:18:52.493 "large_cache_size": 16, 00:18:52.493 "task_count": 2048, 00:18:52.493 "sequence_count": 
2048, 00:18:52.493 "buf_count": 2048 00:18:52.493 } 00:18:52.493 } 00:18:52.493 ] 00:18:52.493 }, 00:18:52.493 { 00:18:52.493 "subsystem": "bdev", 00:18:52.493 "config": [ 00:18:52.493 { 00:18:52.493 "method": "bdev_set_options", 00:18:52.493 "params": { 00:18:52.493 "bdev_io_pool_size": 65535, 00:18:52.493 "bdev_io_cache_size": 256, 00:18:52.493 "bdev_auto_examine": true, 00:18:52.493 "iobuf_small_cache_size": 128, 00:18:52.493 "iobuf_large_cache_size": 16 00:18:52.493 } 00:18:52.493 }, 00:18:52.493 { 00:18:52.493 "method": "bdev_raid_set_options", 00:18:52.493 "params": { 00:18:52.493 "process_window_size_kb": 1024, 00:18:52.493 "process_max_bandwidth_mb_sec": 0 00:18:52.493 } 00:18:52.493 }, 00:18:52.493 { 00:18:52.493 "method": "bdev_iscsi_set_options", 00:18:52.493 "params": { 00:18:52.493 "timeout_sec": 30 00:18:52.493 } 00:18:52.493 }, 00:18:52.493 { 00:18:52.493 "method": "bdev_nvme_set_options", 00:18:52.493 "params": { 00:18:52.493 "action_on_timeout": "none", 00:18:52.493 "timeout_us": 0, 00:18:52.493 "timeout_admin_us": 0, 00:18:52.493 "keep_alive_timeout_ms": 10000, 00:18:52.493 "arbitration_burst": 0, 00:18:52.493 "low_priority_weight": 0, 00:18:52.493 "medium_priority_weight": 0, 00:18:52.493 "high_priority_weight": 0, 00:18:52.493 "nvme_adminq_poll_period_us": 10000, 00:18:52.493 "nvme_ioq_poll_period_us": 0, 00:18:52.493 "io_queue_requests": 512, 00:18:52.493 "delay_cmd_submit": true, 00:18:52.493 "transport_retry_count": 4, 00:18:52.493 "bdev_retry_count": 3, 00:18:52.493 "transport_ack_timeout": 0, 00:18:52.493 "ctrlr_loss_timeout_sec": 0, 00:18:52.493 "reconnect_delay_sec": 0, 00:18:52.493 "fast_io_fail_timeout_sec": 0, 00:18:52.493 "disable_auto_failback": false, 00:18:52.493 "generate_uuids": false, 00:18:52.493 "transport_tos": 0, 00:18:52.493 "nvme_error_stat": false, 00:18:52.493 "rdma_srq_size": 0, 00:18:52.493 "io_path_stat": false, 00:18:52.493 "allow_accel_sequence": false, 00:18:52.493 "rdma_max_cq_size": 0, 00:18:52.493 "rdma_cm_event_timeout_ms": 0, 00:18:52.493 "dhchap_digests": [ 00:18:52.493 "sha256", 00:18:52.493 "sha384", 00:18:52.493 "sha512" 00:18:52.493 ], 00:18:52.493 "dhchap_dhgroups": [ 00:18:52.493 "null", 00:18:52.493 "ffdhe2048", 00:18:52.493 "ffdhe3072", 00:18:52.493 "ffdhe4096", 00:18:52.493 "ffdhe6144", 00:18:52.493 "ffdhe8192" 00:18:52.493 ] 00:18:52.493 } 00:18:52.493 }, 00:18:52.493 { 00:18:52.493 "method": "bdev_nvme_attach_controller", 00:18:52.493 "params": { 00:18:52.493 "name": "TLSTEST", 00:18:52.493 "trtype": "TCP", 00:18:52.493 "adrfam": "IPv4", 00:18:52.493 "traddr": "10.0.0.3", 00:18:52.493 "trsvcid": "4420", 00:18:52.493 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.493 "prchk_reftag": false, 00:18:52.493 "prchk_guard": false, 00:18:52.493 "ctrlr_loss_timeout_sec": 0, 00:18:52.493 "reconnect_delay_sec": 0, 00:18:52.493 "fast_io_fail_timeout_sec": 0, 00:18:52.493 "psk": "key0", 00:18:52.493 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:52.493 "hdgst": false, 00:18:52.493 "ddgst": false, 00:18:52.493 "multipath": "multipath" 00:18:52.493 } 00:18:52.493 }, 00:18:52.493 { 00:18:52.493 "method": "bdev_nvme_set_hotplug", 00:18:52.493 "params": { 00:18:52.493 "period_us": 100000, 00:18:52.493 "enable": false 00:18:52.493 } 00:18:52.493 }, 00:18:52.493 { 00:18:52.493 "method": "bdev_wait_for_examine" 00:18:52.493 } 00:18:52.493 ] 00:18:52.493 }, 00:18:52.493 { 00:18:52.493 "subsystem": "nbd", 00:18:52.493 "config": [] 00:18:52.493 } 00:18:52.493 ] 00:18:52.493 }' 00:18:52.751 [2024-11-04 14:45:01.662242] Starting SPDK v25.01-pre git 
sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:18:52.751 [2024-11-04 14:45:01.662308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70725 ] 00:18:52.751 [2024-11-04 14:45:01.799351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.751 [2024-11-04 14:45:01.838133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.009 [2024-11-04 14:45:01.949233] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:53.009 [2024-11-04 14:45:01.985388] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:53.574 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:53.574 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:53.574 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:53.574 Running I/O for 10 seconds... 00:18:55.893 6318.00 IOPS, 24.68 MiB/s [2024-11-04T14:45:05.975Z] 6372.00 IOPS, 24.89 MiB/s [2024-11-04T14:45:06.937Z] 6399.33 IOPS, 25.00 MiB/s [2024-11-04T14:45:07.870Z] 6454.00 IOPS, 25.21 MiB/s [2024-11-04T14:45:08.802Z] 6599.60 IOPS, 25.78 MiB/s [2024-11-04T14:45:09.735Z] 6696.17 IOPS, 26.16 MiB/s [2024-11-04T14:45:10.667Z] 6766.00 IOPS, 26.43 MiB/s [2024-11-04T14:45:11.599Z] 6819.75 IOPS, 26.64 MiB/s [2024-11-04T14:45:12.971Z] 6860.33 IOPS, 26.80 MiB/s [2024-11-04T14:45:12.971Z] 6889.80 IOPS, 26.91 MiB/s 00:19:03.831 Latency(us) 00:19:03.831 [2024-11-04T14:45:12.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.831 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:03.831 Verification LBA range: start 0x0 length 0x2000 00:19:03.831 TLSTESTn1 : 10.01 6895.60 26.94 0.00 0.00 18532.08 3881.75 15930.29 00:19:03.831 [2024-11-04T14:45:12.971Z] =================================================================================================================== 00:19:03.831 [2024-11-04T14:45:12.971Z] Total : 6895.60 26.94 0.00 0.00 18532.08 3881.75 15930.29 00:19:03.831 { 00:19:03.831 "results": [ 00:19:03.831 { 00:19:03.831 "job": "TLSTESTn1", 00:19:03.831 "core_mask": "0x4", 00:19:03.831 "workload": "verify", 00:19:03.831 "status": "finished", 00:19:03.831 "verify_range": { 00:19:03.831 "start": 0, 00:19:03.831 "length": 8192 00:19:03.831 }, 00:19:03.831 "queue_depth": 128, 00:19:03.831 "io_size": 4096, 00:19:03.831 "runtime": 10.010158, 00:19:03.831 "iops": 6895.595454137687, 00:19:03.832 "mibps": 26.93591974272534, 00:19:03.832 "io_failed": 0, 00:19:03.832 "io_timeout": 0, 00:19:03.832 "avg_latency_us": 18532.077297072006, 00:19:03.832 "min_latency_us": 3881.7476923076924, 00:19:03.832 "max_latency_us": 15930.289230769231 00:19:03.832 } 00:19:03.832 ], 00:19:03.832 "core_count": 1 00:19:03.832 } 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 70725 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 70725 ']' 00:19:03.832 
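The bandwidth column in the summary above follows directly from the measured IOPS and the 4096-byte I/O size used for this workload; for the overall average:

  6895.60 IOPS x 4096 B ≈ 28,244,378 B/s ≈ 26.94 MiB/s

which matches the 26.94 MiB/s bdevperf reports for the 10-second verify run on TLSTESTn1.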
14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 70725 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70725 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70725' 00:19:03.832 killing process with pid 70725 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 70725 00:19:03.832 Received shutdown signal, test time was about 10.000000 seconds 00:19:03.832 00:19:03.832 Latency(us) 00:19:03.832 [2024-11-04T14:45:12.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.832 [2024-11-04T14:45:12.972Z] =================================================================================================================== 00:19:03.832 [2024-11-04T14:45:12.972Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 70725 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 70693 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 70693 ']' 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 70693 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70693 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:03.832 killing process with pid 70693 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70693' 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 70693 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 70693 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=70858 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 70858 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- 
# ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 70858 ']' 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:03.832 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.832 [2024-11-04 14:45:12.917434] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:19:03.832 [2024-11-04 14:45:12.917503] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.090 [2024-11-04 14:45:13.057987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.090 [2024-11-04 14:45:13.092873] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:04.090 [2024-11-04 14:45:13.092911] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:04.090 [2024-11-04 14:45:13.092917] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:04.090 [2024-11-04 14:45:13.092922] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:04.090 [2024-11-04 14:45:13.092926] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:04.090 [2024-11-04 14:45:13.093189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.090 [2024-11-04 14:45:13.123404] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:04.656 14:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:04.656 14:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:04.656 14:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:04.656 14:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:04.656 14:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.656 14:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:04.656 14:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.H8YwNrhVaH 00:19:04.656 14:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.H8YwNrhVaH 00:19:04.656 14:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:04.914 [2024-11-04 14:45:14.006344] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:04.914 14:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:05.172 14:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:19:05.429 [2024-11-04 14:45:14.398414] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:05.429 [2024-11-04 14:45:14.398573] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:05.429 14:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:05.688 malloc0 00:19:05.688 14:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:05.945 14:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.H8YwNrhVaH 00:19:05.945 14:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:06.204 14:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=70912 00:19:06.204 14:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:06.204 14:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:06.204 14:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 70912 /var/tmp/bdevperf.sock 00:19:06.204 14:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 70912 ']' 00:19:06.204 
14:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:06.204 14:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:06.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:06.204 14:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:06.204 14:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:06.204 14:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.204 [2024-11-04 14:45:15.256620] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:19:06.204 [2024-11-04 14:45:15.256683] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70912 ] 00:19:06.462 [2024-11-04 14:45:15.391984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.462 [2024-11-04 14:45:15.428909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.462 [2024-11-04 14:45:15.460452] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:07.028 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:07.028 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:07.028 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.H8YwNrhVaH 00:19:07.286 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:07.544 [2024-11-04 14:45:16.495271] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:07.544 nvme0n1 00:19:07.544 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:07.544 Running I/O for 1 seconds... 
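[editor's note] For readability, the TLS setup exercised by target/tls.sh in the trace above boils down to the following condensed sketch. It is not part of the log; the PSK file /tmp/tmp.H8YwNrhVaH, the 10.0.0.3 listener address, and the NQNs are specific to this run, and the RPC calls are exactly the ones visible in the trace:

    # Target side (default RPC socket /var/tmp/spdk.sock)
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k enables TLS on the listener (reported as experimental in the log)
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.H8YwNrhVaH
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

    # Initiator side (bdevperf started with: bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1)
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.H8YwNrhVaH
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests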
00:19:08.925 6265.00 IOPS, 24.47 MiB/s 00:19:08.926 Latency(us) 00:19:08.926 [2024-11-04T14:45:18.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.926 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:08.926 Verification LBA range: start 0x0 length 0x2000 00:19:08.926 nvme0n1 : 1.01 6336.78 24.75 0.00 0.00 20092.11 2646.65 18955.03 00:19:08.926 [2024-11-04T14:45:18.066Z] =================================================================================================================== 00:19:08.926 [2024-11-04T14:45:18.066Z] Total : 6336.78 24.75 0.00 0.00 20092.11 2646.65 18955.03 00:19:08.926 { 00:19:08.926 "results": [ 00:19:08.926 { 00:19:08.926 "job": "nvme0n1", 00:19:08.926 "core_mask": "0x2", 00:19:08.926 "workload": "verify", 00:19:08.926 "status": "finished", 00:19:08.926 "verify_range": { 00:19:08.926 "start": 0, 00:19:08.926 "length": 8192 00:19:08.926 }, 00:19:08.926 "queue_depth": 128, 00:19:08.926 "io_size": 4096, 00:19:08.926 "runtime": 1.008872, 00:19:08.926 "iops": 6336.780087067536, 00:19:08.926 "mibps": 24.753047215107564, 00:19:08.926 "io_failed": 0, 00:19:08.926 "io_timeout": 0, 00:19:08.926 "avg_latency_us": 20092.105956274292, 00:19:08.926 "min_latency_us": 2646.646153846154, 00:19:08.926 "max_latency_us": 18955.027692307693 00:19:08.926 } 00:19:08.926 ], 00:19:08.926 "core_count": 1 00:19:08.926 } 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 70912 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 70912 ']' 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 70912 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70912 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:08.926 killing process with pid 70912 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70912' 00:19:08.926 Received shutdown signal, test time was about 1.000000 seconds 00:19:08.926 00:19:08.926 Latency(us) 00:19:08.926 [2024-11-04T14:45:18.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.926 [2024-11-04T14:45:18.066Z] =================================================================================================================== 00:19:08.926 [2024-11-04T14:45:18.066Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 70912 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 70912 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 70858 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 70858 ']' 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 70858 00:19:08.926 14:45:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70858 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:08.926 killing process with pid 70858 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70858' 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 70858 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 70858 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=70959 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 70959 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 70959 ']' 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:08.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:08.926 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.926 [2024-11-04 14:45:18.006720] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:19:08.926 [2024-11-04 14:45:18.006776] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:09.184 [2024-11-04 14:45:18.145402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.184 [2024-11-04 14:45:18.176696] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:09.184 [2024-11-04 14:45:18.176739] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:09.184 [2024-11-04 14:45:18.176746] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:09.184 [2024-11-04 14:45:18.176751] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:09.184 [2024-11-04 14:45:18.176755] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:09.184 [2024-11-04 14:45:18.176987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.184 [2024-11-04 14:45:18.205427] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:09.749 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:09.749 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:09.749 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:09.749 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:09.749 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.006 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:10.006 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:10.006 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.006 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.006 [2024-11-04 14:45:18.915427] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:10.006 malloc0 00:19:10.006 [2024-11-04 14:45:18.941297] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:10.006 [2024-11-04 14:45:18.941420] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:10.006 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.007 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=70992 00:19:10.007 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 70992 /var/tmp/bdevperf.sock 00:19:10.007 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 70992 ']' 00:19:10.007 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:10.007 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:10.007 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:10.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:10.007 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:10.007 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:10.007 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.007 [2024-11-04 14:45:19.006841] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:19:10.007 [2024-11-04 14:45:19.006908] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70992 ] 00:19:10.007 [2024-11-04 14:45:19.146273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.264 [2024-11-04 14:45:19.183251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:10.264 [2024-11-04 14:45:19.215390] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:10.264 14:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:10.264 14:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:10.264 14:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.H8YwNrhVaH 00:19:10.521 14:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:10.521 [2024-11-04 14:45:19.658157] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:10.777 nvme0n1 00:19:10.777 14:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:10.777 Running I/O for 1 seconds... 
00:19:11.718 6300.00 IOPS, 24.61 MiB/s 00:19:11.718 Latency(us) 00:19:11.718 [2024-11-04T14:45:20.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.718 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:11.718 Verification LBA range: start 0x0 length 0x2000 00:19:11.718 nvme0n1 : 1.01 6368.44 24.88 0.00 0.00 19991.26 3075.15 15930.29 00:19:11.718 [2024-11-04T14:45:20.858Z] =================================================================================================================== 00:19:11.718 [2024-11-04T14:45:20.858Z] Total : 6368.44 24.88 0.00 0.00 19991.26 3075.15 15930.29 00:19:11.718 { 00:19:11.718 "results": [ 00:19:11.718 { 00:19:11.718 "job": "nvme0n1", 00:19:11.718 "core_mask": "0x2", 00:19:11.718 "workload": "verify", 00:19:11.718 "status": "finished", 00:19:11.718 "verify_range": { 00:19:11.718 "start": 0, 00:19:11.718 "length": 8192 00:19:11.718 }, 00:19:11.718 "queue_depth": 128, 00:19:11.718 "io_size": 4096, 00:19:11.718 "runtime": 1.009353, 00:19:11.718 "iops": 6368.436017924353, 00:19:11.718 "mibps": 24.876703195017004, 00:19:11.718 "io_failed": 0, 00:19:11.718 "io_timeout": 0, 00:19:11.718 "avg_latency_us": 19991.26314680963, 00:19:11.718 "min_latency_us": 3075.150769230769, 00:19:11.718 "max_latency_us": 15930.289230769231 00:19:11.718 } 00:19:11.718 ], 00:19:11.718 "core_count": 1 00:19:11.718 } 00:19:11.718 14:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:11.718 14:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.718 14:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.976 14:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.976 14:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:11.976 "subsystems": [ 00:19:11.976 { 00:19:11.976 "subsystem": "keyring", 00:19:11.976 "config": [ 00:19:11.976 { 00:19:11.976 "method": "keyring_file_add_key", 00:19:11.976 "params": { 00:19:11.976 "name": "key0", 00:19:11.976 "path": "/tmp/tmp.H8YwNrhVaH" 00:19:11.976 } 00:19:11.976 } 00:19:11.976 ] 00:19:11.976 }, 00:19:11.976 { 00:19:11.976 "subsystem": "iobuf", 00:19:11.976 "config": [ 00:19:11.976 { 00:19:11.976 "method": "iobuf_set_options", 00:19:11.976 "params": { 00:19:11.976 "small_pool_count": 8192, 00:19:11.976 "large_pool_count": 1024, 00:19:11.976 "small_bufsize": 8192, 00:19:11.976 "large_bufsize": 135168, 00:19:11.976 "enable_numa": false 00:19:11.976 } 00:19:11.976 } 00:19:11.976 ] 00:19:11.976 }, 00:19:11.976 { 00:19:11.976 "subsystem": "sock", 00:19:11.976 "config": [ 00:19:11.976 { 00:19:11.976 "method": "sock_set_default_impl", 00:19:11.976 "params": { 00:19:11.976 "impl_name": "uring" 00:19:11.976 } 00:19:11.976 }, 00:19:11.976 { 00:19:11.976 "method": "sock_impl_set_options", 00:19:11.976 "params": { 00:19:11.976 "impl_name": "ssl", 00:19:11.976 "recv_buf_size": 4096, 00:19:11.976 "send_buf_size": 4096, 00:19:11.976 "enable_recv_pipe": true, 00:19:11.976 "enable_quickack": false, 00:19:11.976 "enable_placement_id": 0, 00:19:11.976 "enable_zerocopy_send_server": true, 00:19:11.976 "enable_zerocopy_send_client": false, 00:19:11.976 "zerocopy_threshold": 0, 00:19:11.976 "tls_version": 0, 00:19:11.976 "enable_ktls": false 00:19:11.976 } 00:19:11.976 }, 00:19:11.976 { 00:19:11.976 "method": "sock_impl_set_options", 00:19:11.976 "params": { 00:19:11.976 "impl_name": "posix", 
00:19:11.976 "recv_buf_size": 2097152, 00:19:11.976 "send_buf_size": 2097152, 00:19:11.976 "enable_recv_pipe": true, 00:19:11.976 "enable_quickack": false, 00:19:11.976 "enable_placement_id": 0, 00:19:11.976 "enable_zerocopy_send_server": true, 00:19:11.976 "enable_zerocopy_send_client": false, 00:19:11.976 "zerocopy_threshold": 0, 00:19:11.976 "tls_version": 0, 00:19:11.976 "enable_ktls": false 00:19:11.976 } 00:19:11.976 }, 00:19:11.976 { 00:19:11.976 "method": "sock_impl_set_options", 00:19:11.976 "params": { 00:19:11.976 "impl_name": "uring", 00:19:11.976 "recv_buf_size": 2097152, 00:19:11.976 "send_buf_size": 2097152, 00:19:11.976 "enable_recv_pipe": true, 00:19:11.976 "enable_quickack": false, 00:19:11.976 "enable_placement_id": 0, 00:19:11.976 "enable_zerocopy_send_server": false, 00:19:11.976 "enable_zerocopy_send_client": false, 00:19:11.976 "zerocopy_threshold": 0, 00:19:11.976 "tls_version": 0, 00:19:11.976 "enable_ktls": false 00:19:11.976 } 00:19:11.976 } 00:19:11.976 ] 00:19:11.976 }, 00:19:11.976 { 00:19:11.976 "subsystem": "vmd", 00:19:11.976 "config": [] 00:19:11.976 }, 00:19:11.976 { 00:19:11.976 "subsystem": "accel", 00:19:11.976 "config": [ 00:19:11.976 { 00:19:11.976 "method": "accel_set_options", 00:19:11.976 "params": { 00:19:11.976 "small_cache_size": 128, 00:19:11.976 "large_cache_size": 16, 00:19:11.976 "task_count": 2048, 00:19:11.976 "sequence_count": 2048, 00:19:11.976 "buf_count": 2048 00:19:11.976 } 00:19:11.976 } 00:19:11.976 ] 00:19:11.976 }, 00:19:11.976 { 00:19:11.976 "subsystem": "bdev", 00:19:11.976 "config": [ 00:19:11.976 { 00:19:11.976 "method": "bdev_set_options", 00:19:11.976 "params": { 00:19:11.976 "bdev_io_pool_size": 65535, 00:19:11.976 "bdev_io_cache_size": 256, 00:19:11.976 "bdev_auto_examine": true, 00:19:11.976 "iobuf_small_cache_size": 128, 00:19:11.976 "iobuf_large_cache_size": 16 00:19:11.976 } 00:19:11.976 }, 00:19:11.976 { 00:19:11.976 "method": "bdev_raid_set_options", 00:19:11.976 "params": { 00:19:11.976 "process_window_size_kb": 1024, 00:19:11.976 "process_max_bandwidth_mb_sec": 0 00:19:11.977 } 00:19:11.977 }, 00:19:11.977 { 00:19:11.977 "method": "bdev_iscsi_set_options", 00:19:11.977 "params": { 00:19:11.977 "timeout_sec": 30 00:19:11.977 } 00:19:11.977 }, 00:19:11.977 { 00:19:11.977 "method": "bdev_nvme_set_options", 00:19:11.977 "params": { 00:19:11.977 "action_on_timeout": "none", 00:19:11.977 "timeout_us": 0, 00:19:11.977 "timeout_admin_us": 0, 00:19:11.977 "keep_alive_timeout_ms": 10000, 00:19:11.977 "arbitration_burst": 0, 00:19:11.977 "low_priority_weight": 0, 00:19:11.977 "medium_priority_weight": 0, 00:19:11.977 "high_priority_weight": 0, 00:19:11.977 "nvme_adminq_poll_period_us": 10000, 00:19:11.977 "nvme_ioq_poll_period_us": 0, 00:19:11.977 "io_queue_requests": 0, 00:19:11.977 "delay_cmd_submit": true, 00:19:11.977 "transport_retry_count": 4, 00:19:11.977 "bdev_retry_count": 3, 00:19:11.977 "transport_ack_timeout": 0, 00:19:11.977 "ctrlr_loss_timeout_sec": 0, 00:19:11.977 "reconnect_delay_sec": 0, 00:19:11.977 "fast_io_fail_timeout_sec": 0, 00:19:11.977 "disable_auto_failback": false, 00:19:11.977 "generate_uuids": false, 00:19:11.977 "transport_tos": 0, 00:19:11.977 "nvme_error_stat": false, 00:19:11.977 "rdma_srq_size": 0, 00:19:11.977 "io_path_stat": false, 00:19:11.977 "allow_accel_sequence": false, 00:19:11.977 "rdma_max_cq_size": 0, 00:19:11.977 "rdma_cm_event_timeout_ms": 0, 00:19:11.977 "dhchap_digests": [ 00:19:11.977 "sha256", 00:19:11.977 "sha384", 00:19:11.977 "sha512" 00:19:11.977 ], 00:19:11.977 
"dhchap_dhgroups": [ 00:19:11.977 "null", 00:19:11.977 "ffdhe2048", 00:19:11.977 "ffdhe3072", 00:19:11.977 "ffdhe4096", 00:19:11.977 "ffdhe6144", 00:19:11.977 "ffdhe8192" 00:19:11.977 ] 00:19:11.977 } 00:19:11.977 }, 00:19:11.977 { 00:19:11.977 "method": "bdev_nvme_set_hotplug", 00:19:11.977 "params": { 00:19:11.977 "period_us": 100000, 00:19:11.977 "enable": false 00:19:11.977 } 00:19:11.977 }, 00:19:11.977 { 00:19:11.977 "method": "bdev_malloc_create", 00:19:11.977 "params": { 00:19:11.977 "name": "malloc0", 00:19:11.977 "num_blocks": 8192, 00:19:11.977 "block_size": 4096, 00:19:11.977 "physical_block_size": 4096, 00:19:11.977 "uuid": "e7fa94ac-7312-45d5-80ca-0a2b20031ff8", 00:19:11.977 "optimal_io_boundary": 0, 00:19:11.977 "md_size": 0, 00:19:11.977 "dif_type": 0, 00:19:11.977 "dif_is_head_of_md": false, 00:19:11.977 "dif_pi_format": 0 00:19:11.977 } 00:19:11.977 }, 00:19:11.977 { 00:19:11.977 "method": "bdev_wait_for_examine" 00:19:11.977 } 00:19:11.977 ] 00:19:11.977 }, 00:19:11.977 { 00:19:11.977 "subsystem": "nbd", 00:19:11.977 "config": [] 00:19:11.977 }, 00:19:11.977 { 00:19:11.977 "subsystem": "scheduler", 00:19:11.977 "config": [ 00:19:11.977 { 00:19:11.977 "method": "framework_set_scheduler", 00:19:11.977 "params": { 00:19:11.977 "name": "static" 00:19:11.977 } 00:19:11.977 } 00:19:11.977 ] 00:19:11.977 }, 00:19:11.977 { 00:19:11.977 "subsystem": "nvmf", 00:19:11.977 "config": [ 00:19:11.977 { 00:19:11.977 "method": "nvmf_set_config", 00:19:11.977 "params": { 00:19:11.977 "discovery_filter": "match_any", 00:19:11.977 "admin_cmd_passthru": { 00:19:11.977 "identify_ctrlr": false 00:19:11.977 }, 00:19:11.977 "dhchap_digests": [ 00:19:11.977 "sha256", 00:19:11.977 "sha384", 00:19:11.977 "sha512" 00:19:11.977 ], 00:19:11.977 "dhchap_dhgroups": [ 00:19:11.977 "null", 00:19:11.977 "ffdhe2048", 00:19:11.977 "ffdhe3072", 00:19:11.977 "ffdhe4096", 00:19:11.977 "ffdhe6144", 00:19:11.977 "ffdhe8192" 00:19:11.977 ] 00:19:11.977 } 00:19:11.977 }, 00:19:11.977 { 00:19:11.977 "method": "nvmf_set_max_subsystems", 00:19:11.977 "params": { 00:19:11.977 "max_subsystems": 1024 00:19:11.977 } 00:19:11.977 }, 00:19:11.977 { 00:19:11.977 "method": "nvmf_set_crdt", 00:19:11.977 "params": { 00:19:11.977 "crdt1": 0, 00:19:11.977 "crdt2": 0, 00:19:11.977 "crdt3": 0 00:19:11.977 } 00:19:11.977 }, 00:19:11.977 { 00:19:11.977 "method": "nvmf_create_transport", 00:19:11.977 "params": { 00:19:11.977 "trtype": "TCP", 00:19:11.977 "max_queue_depth": 128, 00:19:11.977 "max_io_qpairs_per_ctrlr": 127, 00:19:11.977 "in_capsule_data_size": 4096, 00:19:11.977 "max_io_size": 131072, 00:19:11.977 "io_unit_size": 131072, 00:19:11.977 "max_aq_depth": 128, 00:19:11.977 "num_shared_buffers": 511, 00:19:11.977 "buf_cache_size": 4294967295, 00:19:11.977 "dif_insert_or_strip": false, 00:19:11.977 "zcopy": false, 00:19:11.977 "c2h_success": false, 00:19:11.977 "sock_priority": 0, 00:19:11.977 "abort_timeout_sec": 1, 00:19:11.977 "ack_timeout": 0, 00:19:11.977 "data_wr_pool_size": 0 00:19:11.977 } 00:19:11.977 }, 00:19:11.977 { 00:19:11.977 "method": "nvmf_create_subsystem", 00:19:11.977 "params": { 00:19:11.977 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:11.977 "allow_any_host": false, 00:19:11.977 "serial_number": "00000000000000000000", 00:19:11.977 "model_number": "SPDK bdev Controller", 00:19:11.977 "max_namespaces": 32, 00:19:11.977 "min_cntlid": 1, 00:19:11.977 "max_cntlid": 65519, 00:19:11.977 "ana_reporting": false 00:19:11.977 } 00:19:11.977 }, 00:19:11.977 { 00:19:11.977 "method": "nvmf_subsystem_add_host", 
00:19:11.977 "params": { 00:19:11.977 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:11.977 "host": "nqn.2016-06.io.spdk:host1", 00:19:11.977 "psk": "key0" 00:19:11.977 } 00:19:11.977 }, 00:19:11.977 { 00:19:11.977 "method": "nvmf_subsystem_add_ns", 00:19:11.977 "params": { 00:19:11.977 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:11.977 "namespace": { 00:19:11.977 "nsid": 1, 00:19:11.977 "bdev_name": "malloc0", 00:19:11.977 "nguid": "E7FA94AC731245D580CA0A2B20031FF8", 00:19:11.977 "uuid": "e7fa94ac-7312-45d5-80ca-0a2b20031ff8", 00:19:11.977 "no_auto_visible": false 00:19:11.977 } 00:19:11.977 } 00:19:11.977 }, 00:19:11.977 { 00:19:11.977 "method": "nvmf_subsystem_add_listener", 00:19:11.977 "params": { 00:19:11.977 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:11.977 "listen_address": { 00:19:11.977 "trtype": "TCP", 00:19:11.977 "adrfam": "IPv4", 00:19:11.977 "traddr": "10.0.0.3", 00:19:11.977 "trsvcid": "4420" 00:19:11.977 }, 00:19:11.977 "secure_channel": false, 00:19:11.977 "sock_impl": "ssl" 00:19:11.977 } 00:19:11.977 } 00:19:11.977 ] 00:19:11.977 } 00:19:11.977 ] 00:19:11.977 }' 00:19:11.977 14:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:12.235 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:12.235 "subsystems": [ 00:19:12.235 { 00:19:12.235 "subsystem": "keyring", 00:19:12.235 "config": [ 00:19:12.235 { 00:19:12.235 "method": "keyring_file_add_key", 00:19:12.235 "params": { 00:19:12.235 "name": "key0", 00:19:12.235 "path": "/tmp/tmp.H8YwNrhVaH" 00:19:12.235 } 00:19:12.235 } 00:19:12.235 ] 00:19:12.235 }, 00:19:12.235 { 00:19:12.235 "subsystem": "iobuf", 00:19:12.235 "config": [ 00:19:12.235 { 00:19:12.235 "method": "iobuf_set_options", 00:19:12.235 "params": { 00:19:12.235 "small_pool_count": 8192, 00:19:12.235 "large_pool_count": 1024, 00:19:12.235 "small_bufsize": 8192, 00:19:12.235 "large_bufsize": 135168, 00:19:12.235 "enable_numa": false 00:19:12.235 } 00:19:12.235 } 00:19:12.235 ] 00:19:12.235 }, 00:19:12.235 { 00:19:12.235 "subsystem": "sock", 00:19:12.235 "config": [ 00:19:12.236 { 00:19:12.236 "method": "sock_set_default_impl", 00:19:12.236 "params": { 00:19:12.236 "impl_name": "uring" 00:19:12.236 } 00:19:12.236 }, 00:19:12.236 { 00:19:12.236 "method": "sock_impl_set_options", 00:19:12.236 "params": { 00:19:12.236 "impl_name": "ssl", 00:19:12.236 "recv_buf_size": 4096, 00:19:12.236 "send_buf_size": 4096, 00:19:12.236 "enable_recv_pipe": true, 00:19:12.236 "enable_quickack": false, 00:19:12.236 "enable_placement_id": 0, 00:19:12.236 "enable_zerocopy_send_server": true, 00:19:12.236 "enable_zerocopy_send_client": false, 00:19:12.236 "zerocopy_threshold": 0, 00:19:12.236 "tls_version": 0, 00:19:12.236 "enable_ktls": false 00:19:12.236 } 00:19:12.236 }, 00:19:12.236 { 00:19:12.236 "method": "sock_impl_set_options", 00:19:12.236 "params": { 00:19:12.236 "impl_name": "posix", 00:19:12.236 "recv_buf_size": 2097152, 00:19:12.236 "send_buf_size": 2097152, 00:19:12.236 "enable_recv_pipe": true, 00:19:12.236 "enable_quickack": false, 00:19:12.236 "enable_placement_id": 0, 00:19:12.236 "enable_zerocopy_send_server": true, 00:19:12.236 "enable_zerocopy_send_client": false, 00:19:12.236 "zerocopy_threshold": 0, 00:19:12.236 "tls_version": 0, 00:19:12.236 "enable_ktls": false 00:19:12.236 } 00:19:12.236 }, 00:19:12.236 { 00:19:12.236 "method": "sock_impl_set_options", 00:19:12.236 "params": { 00:19:12.236 "impl_name": "uring", 00:19:12.236 
"recv_buf_size": 2097152, 00:19:12.236 "send_buf_size": 2097152, 00:19:12.236 "enable_recv_pipe": true, 00:19:12.236 "enable_quickack": false, 00:19:12.236 "enable_placement_id": 0, 00:19:12.236 "enable_zerocopy_send_server": false, 00:19:12.236 "enable_zerocopy_send_client": false, 00:19:12.236 "zerocopy_threshold": 0, 00:19:12.236 "tls_version": 0, 00:19:12.236 "enable_ktls": false 00:19:12.236 } 00:19:12.236 } 00:19:12.236 ] 00:19:12.236 }, 00:19:12.236 { 00:19:12.236 "subsystem": "vmd", 00:19:12.236 "config": [] 00:19:12.236 }, 00:19:12.236 { 00:19:12.236 "subsystem": "accel", 00:19:12.236 "config": [ 00:19:12.236 { 00:19:12.236 "method": "accel_set_options", 00:19:12.236 "params": { 00:19:12.236 "small_cache_size": 128, 00:19:12.236 "large_cache_size": 16, 00:19:12.236 "task_count": 2048, 00:19:12.236 "sequence_count": 2048, 00:19:12.236 "buf_count": 2048 00:19:12.236 } 00:19:12.236 } 00:19:12.236 ] 00:19:12.236 }, 00:19:12.236 { 00:19:12.236 "subsystem": "bdev", 00:19:12.236 "config": [ 00:19:12.236 { 00:19:12.236 "method": "bdev_set_options", 00:19:12.236 "params": { 00:19:12.236 "bdev_io_pool_size": 65535, 00:19:12.236 "bdev_io_cache_size": 256, 00:19:12.236 "bdev_auto_examine": true, 00:19:12.236 "iobuf_small_cache_size": 128, 00:19:12.236 "iobuf_large_cache_size": 16 00:19:12.236 } 00:19:12.236 }, 00:19:12.236 { 00:19:12.236 "method": "bdev_raid_set_options", 00:19:12.236 "params": { 00:19:12.236 "process_window_size_kb": 1024, 00:19:12.236 "process_max_bandwidth_mb_sec": 0 00:19:12.236 } 00:19:12.236 }, 00:19:12.236 { 00:19:12.236 "method": "bdev_iscsi_set_options", 00:19:12.236 "params": { 00:19:12.236 "timeout_sec": 30 00:19:12.236 } 00:19:12.236 }, 00:19:12.236 { 00:19:12.236 "method": "bdev_nvme_set_options", 00:19:12.236 "params": { 00:19:12.236 "action_on_timeout": "none", 00:19:12.236 "timeout_us": 0, 00:19:12.236 "timeout_admin_us": 0, 00:19:12.236 "keep_alive_timeout_ms": 10000, 00:19:12.236 "arbitration_burst": 0, 00:19:12.236 "low_priority_weight": 0, 00:19:12.236 "medium_priority_weight": 0, 00:19:12.236 "high_priority_weight": 0, 00:19:12.236 "nvme_adminq_poll_period_us": 10000, 00:19:12.236 "nvme_ioq_poll_period_us": 0, 00:19:12.236 "io_queue_requests": 512, 00:19:12.236 "delay_cmd_submit": true, 00:19:12.236 "transport_retry_count": 4, 00:19:12.236 "bdev_retry_count": 3, 00:19:12.236 "transport_ack_timeout": 0, 00:19:12.236 "ctrlr_loss_timeout_sec": 0, 00:19:12.236 "reconnect_delay_sec": 0, 00:19:12.236 "fast_io_fail_timeout_sec": 0, 00:19:12.236 "disable_auto_failback": false, 00:19:12.236 "generate_uuids": false, 00:19:12.236 "transport_tos": 0, 00:19:12.236 "nvme_error_stat": false, 00:19:12.236 "rdma_srq_size": 0, 00:19:12.236 "io_path_stat": false, 00:19:12.236 "allow_accel_sequence": false, 00:19:12.236 "rdma_max_cq_size": 0, 00:19:12.236 "rdma_cm_event_timeout_ms": 0, 00:19:12.236 "dhchap_digests": [ 00:19:12.236 "sha256", 00:19:12.236 "sha384", 00:19:12.236 "sha512" 00:19:12.236 ], 00:19:12.236 "dhchap_dhgroups": [ 00:19:12.236 "null", 00:19:12.236 "ffdhe2048", 00:19:12.236 "ffdhe3072", 00:19:12.236 "ffdhe4096", 00:19:12.236 "ffdhe6144", 00:19:12.236 "ffdhe8192" 00:19:12.236 ] 00:19:12.236 } 00:19:12.236 }, 00:19:12.236 { 00:19:12.236 "method": "bdev_nvme_attach_controller", 00:19:12.236 "params": { 00:19:12.236 "name": "nvme0", 00:19:12.237 "trtype": "TCP", 00:19:12.237 "adrfam": "IPv4", 00:19:12.237 "traddr": "10.0.0.3", 00:19:12.237 "trsvcid": "4420", 00:19:12.237 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.237 "prchk_reftag": false, 00:19:12.237 
"prchk_guard": false, 00:19:12.237 "ctrlr_loss_timeout_sec": 0, 00:19:12.237 "reconnect_delay_sec": 0, 00:19:12.237 "fast_io_fail_timeout_sec": 0, 00:19:12.237 "psk": "key0", 00:19:12.237 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:12.237 "hdgst": false, 00:19:12.237 "ddgst": false, 00:19:12.237 "multipath": "multipath" 00:19:12.237 } 00:19:12.237 }, 00:19:12.237 { 00:19:12.237 "method": "bdev_nvme_set_hotplug", 00:19:12.237 "params": { 00:19:12.237 "period_us": 100000, 00:19:12.237 "enable": false 00:19:12.237 } 00:19:12.237 }, 00:19:12.237 { 00:19:12.237 "method": "bdev_enable_histogram", 00:19:12.237 "params": { 00:19:12.237 "name": "nvme0n1", 00:19:12.237 "enable": true 00:19:12.237 } 00:19:12.237 }, 00:19:12.237 { 00:19:12.237 "method": "bdev_wait_for_examine" 00:19:12.237 } 00:19:12.237 ] 00:19:12.237 }, 00:19:12.237 { 00:19:12.237 "subsystem": "nbd", 00:19:12.237 "config": [] 00:19:12.237 } 00:19:12.237 ] 00:19:12.237 }' 00:19:12.237 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 70992 00:19:12.237 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 70992 ']' 00:19:12.237 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 70992 00:19:12.237 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:12.237 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:12.237 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70992 00:19:12.237 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:12.237 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:12.237 killing process with pid 70992 00:19:12.237 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70992' 00:19:12.237 Received shutdown signal, test time was about 1.000000 seconds 00:19:12.237 00:19:12.237 Latency(us) 00:19:12.237 [2024-11-04T14:45:21.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.237 [2024-11-04T14:45:21.377Z] =================================================================================================================== 00:19:12.237 [2024-11-04T14:45:21.377Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:12.237 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 70992 00:19:12.237 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 70992 00:19:12.237 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 70959 00:19:12.237 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 70959 ']' 00:19:12.237 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 70959 00:19:12.237 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:12.237 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:12.237 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70959 00:19:12.503 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:12.503 14:45:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:12.503 killing process with pid 70959 00:19:12.503 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70959' 00:19:12.503 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 70959 00:19:12.503 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 70959 00:19:12.503 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:12.503 "subsystems": [ 00:19:12.503 { 00:19:12.503 "subsystem": "keyring", 00:19:12.503 "config": [ 00:19:12.503 { 00:19:12.503 "method": "keyring_file_add_key", 00:19:12.503 "params": { 00:19:12.503 "name": "key0", 00:19:12.503 "path": "/tmp/tmp.H8YwNrhVaH" 00:19:12.503 } 00:19:12.503 } 00:19:12.503 ] 00:19:12.503 }, 00:19:12.503 { 00:19:12.503 "subsystem": "iobuf", 00:19:12.503 "config": [ 00:19:12.503 { 00:19:12.503 "method": "iobuf_set_options", 00:19:12.503 "params": { 00:19:12.503 "small_pool_count": 8192, 00:19:12.503 "large_pool_count": 1024, 00:19:12.503 "small_bufsize": 8192, 00:19:12.503 "large_bufsize": 135168, 00:19:12.503 "enable_numa": false 00:19:12.503 } 00:19:12.503 } 00:19:12.503 ] 00:19:12.503 }, 00:19:12.503 { 00:19:12.503 "subsystem": "sock", 00:19:12.503 "config": [ 00:19:12.503 { 00:19:12.503 "method": "sock_set_default_impl", 00:19:12.503 "params": { 00:19:12.503 "impl_name": "uring" 00:19:12.503 } 00:19:12.503 }, 00:19:12.503 { 00:19:12.503 "method": "sock_impl_set_options", 00:19:12.503 "params": { 00:19:12.503 "impl_name": "ssl", 00:19:12.503 "recv_buf_size": 4096, 00:19:12.503 "send_buf_size": 4096, 00:19:12.503 "enable_recv_pipe": true, 00:19:12.503 "enable_quickack": false, 00:19:12.503 "enable_placement_id": 0, 00:19:12.503 "enable_zerocopy_send_server": true, 00:19:12.503 "enable_zerocopy_send_client": false, 00:19:12.503 "zerocopy_threshold": 0, 00:19:12.503 "tls_version": 0, 00:19:12.503 "enable_ktls": false 00:19:12.503 } 00:19:12.503 }, 00:19:12.503 { 00:19:12.503 "method": "sock_impl_set_options", 00:19:12.503 "params": { 00:19:12.503 "impl_name": "posix", 00:19:12.503 "recv_buf_size": 2097152, 00:19:12.503 "send_buf_size": 2097152, 00:19:12.503 "enable_recv_pipe": true, 00:19:12.503 "enable_quickack": false, 00:19:12.503 "enable_placement_id": 0, 00:19:12.503 "enable_zerocopy_send_server": true, 00:19:12.503 "enable_zerocopy_send_client": false, 00:19:12.503 "zerocopy_threshold": 0, 00:19:12.503 "tls_version": 0, 00:19:12.503 "enable_ktls": false 00:19:12.503 } 00:19:12.503 }, 00:19:12.503 { 00:19:12.503 "method": "sock_impl_set_options", 00:19:12.503 "params": { 00:19:12.503 "impl_name": "uring", 00:19:12.503 "recv_buf_size": 2097152, 00:19:12.503 "send_buf_size": 2097152, 00:19:12.503 "enable_recv_pipe": true, 00:19:12.503 "enable_quickack": false, 00:19:12.503 "enable_placement_id": 0, 00:19:12.503 "enable_zerocopy_send_server": false, 00:19:12.503 "enable_zerocopy_send_client": false, 00:19:12.503 "zerocopy_threshold": 0, 00:19:12.503 "tls_version": 0, 00:19:12.503 "enable_ktls": false 00:19:12.503 } 00:19:12.503 } 00:19:12.503 ] 00:19:12.503 }, 00:19:12.503 { 00:19:12.503 "subsystem": "vmd", 00:19:12.503 "config": [] 00:19:12.503 }, 00:19:12.503 { 00:19:12.503 "subsystem": "accel", 00:19:12.503 "config": [ 00:19:12.503 { 00:19:12.503 "method": "accel_set_options", 00:19:12.503 "params": { 00:19:12.503 "small_cache_size": 128, 00:19:12.503 "large_cache_size": 16, 
00:19:12.503 "task_count": 2048, 00:19:12.504 "sequence_count": 2048, 00:19:12.504 "buf_count": 2048 00:19:12.504 } 00:19:12.504 } 00:19:12.504 ] 00:19:12.504 }, 00:19:12.504 { 00:19:12.504 "subsystem": "bdev", 00:19:12.504 "config": [ 00:19:12.504 { 00:19:12.504 "method": "bdev_set_options", 00:19:12.504 "params": { 00:19:12.504 "bdev_io_pool_size": 65535, 00:19:12.504 "bdev_io_cache_size": 256, 00:19:12.504 "bdev_auto_examine": true, 00:19:12.504 "iobuf_small_cache_size": 128, 00:19:12.504 "iobuf_large_cache_size": 16 00:19:12.504 } 00:19:12.504 }, 00:19:12.504 { 00:19:12.504 "method": "bdev_raid_set_options", 00:19:12.504 "params": { 00:19:12.504 "process_window_size_kb": 1024, 00:19:12.504 "process_max_bandwidth_mb_sec": 0 00:19:12.504 } 00:19:12.504 }, 00:19:12.504 { 00:19:12.504 "method": "bdev_iscsi_set_options", 00:19:12.504 "params": { 00:19:12.504 "timeout_sec": 30 00:19:12.504 } 00:19:12.504 }, 00:19:12.504 { 00:19:12.504 "method": "bdev_nvme_set_options", 00:19:12.504 "params": { 00:19:12.504 "action_on_timeout": "none", 00:19:12.504 "timeout_us": 0, 00:19:12.504 "timeout_admin_us": 0, 00:19:12.504 "keep_alive_timeout_ms": 10000, 00:19:12.504 "arbitration_burst": 0, 00:19:12.504 "low_priority_weight": 0, 00:19:12.504 "medium_priority_weight": 0, 00:19:12.504 "high_priority_weight": 0, 00:19:12.504 "nvme_adminq_poll_period_us": 10000, 00:19:12.504 "nvme_ioq_poll_period_us": 0, 00:19:12.504 "io_queue_requests": 0, 00:19:12.504 "delay_cmd_submit": true, 00:19:12.504 "transport_retry_count": 4, 00:19:12.504 "bdev_retry_count": 3, 00:19:12.504 "transport_ack_timeout": 0, 00:19:12.504 "ctrlr_loss_timeout_sec": 0, 00:19:12.504 "reconnect_delay_sec": 0, 00:19:12.504 "fast_io_fail_timeout_sec": 0, 00:19:12.504 "disable_auto_failback": false, 00:19:12.504 "generate_uuids": false, 00:19:12.504 "transport_tos": 0, 00:19:12.504 "nvme_error_stat": false, 00:19:12.504 "rdma_srq_size": 0, 00:19:12.504 "io_path_stat": false, 00:19:12.504 "allow_accel_sequence": false, 00:19:12.504 "rdma_max_cq_size": 0, 00:19:12.504 "rdma_cm_event_timeout_ms": 0, 00:19:12.504 "dhchap_digests": [ 00:19:12.504 "sha256", 00:19:12.504 "sha384", 00:19:12.504 "sha512" 00:19:12.504 ], 00:19:12.504 "dhchap_dhgroups": [ 00:19:12.504 "null", 00:19:12.504 "ffdhe2048", 00:19:12.504 "ffdhe3072", 00:19:12.504 "ffdhe4096", 00:19:12.504 "ffdhe6144", 00:19:12.504 "ffdhe8192" 00:19:12.504 ] 00:19:12.504 } 00:19:12.504 }, 00:19:12.504 { 00:19:12.504 "method": "bdev_nvme_set_hotplug", 00:19:12.504 "params": { 00:19:12.504 "period_us": 100000, 00:19:12.504 "enable": false 00:19:12.504 } 00:19:12.504 }, 00:19:12.504 { 00:19:12.504 "method": "bdev_malloc_create", 00:19:12.504 "params": { 00:19:12.504 "name": "malloc0", 00:19:12.504 "num_blocks": 8192, 00:19:12.504 "block_size": 4096, 00:19:12.504 "physical_block_size": 4096, 00:19:12.504 "uuid": "e7fa94ac-7312-45d5-80ca-0a2b20031ff8", 00:19:12.504 "optimal_io_boundary": 0, 00:19:12.504 "md_size": 0, 00:19:12.504 "dif_type": 0, 00:19:12.504 "dif_is_head_of_md": false, 00:19:12.504 "dif_pi_format": 0 00:19:12.504 } 00:19:12.504 }, 00:19:12.504 { 00:19:12.504 "method": "bdev_wait_for_examine" 00:19:12.504 } 00:19:12.504 ] 00:19:12.504 }, 00:19:12.504 { 00:19:12.504 "subsystem": "nbd", 00:19:12.504 "config": [] 00:19:12.504 }, 00:19:12.504 { 00:19:12.504 "subsystem": "scheduler", 00:19:12.504 "config": [ 00:19:12.504 { 00:19:12.504 "method": "framework_set_scheduler", 00:19:12.504 "params": { 00:19:12.504 "name": "static" 00:19:12.504 } 00:19:12.504 } 00:19:12.504 ] 00:19:12.504 }, 
00:19:12.504 { 00:19:12.504 "subsystem": "nvmf", 00:19:12.504 "config": [ 00:19:12.504 { 00:19:12.504 "method": "nvmf_set_config", 00:19:12.504 "params": { 00:19:12.504 "discovery_filter": "match_any", 00:19:12.504 "admin_cmd_passthru": { 00:19:12.504 "identify_ctrlr": false 00:19:12.504 }, 00:19:12.504 "dhchap_digests": [ 00:19:12.504 "sha256", 00:19:12.504 "sha384", 00:19:12.504 "sha512" 00:19:12.504 ], 00:19:12.504 "dhchap_dhgroups": [ 00:19:12.504 "null", 00:19:12.504 "ffdhe2048", 00:19:12.504 "ffdhe3072", 00:19:12.504 "ffdhe4096", 00:19:12.504 "ffdhe6144", 00:19:12.504 "ffdhe8192" 00:19:12.504 ] 00:19:12.504 } 00:19:12.504 }, 00:19:12.504 { 00:19:12.504 "method": "nvmf_set_max_subsystems", 00:19:12.504 "params": { 00:19:12.504 "max_subsystems": 1024 00:19:12.504 } 00:19:12.504 }, 00:19:12.504 { 00:19:12.504 "method": "nvmf_set_crdt", 00:19:12.504 "params": { 00:19:12.504 "crdt1": 0, 00:19:12.504 "crdt2": 0, 00:19:12.504 "crdt3": 0 00:19:12.504 } 00:19:12.504 }, 00:19:12.504 { 00:19:12.504 "method": "nvmf_create_transport", 00:19:12.504 "params": { 00:19:12.504 "trtype": "TCP", 00:19:12.504 "max_queue_depth": 128, 00:19:12.504 "max_io_qpairs_per_ctrlr": 127, 00:19:12.504 "in_capsule_data_size": 4096, 00:19:12.504 "max_io_size": 131072, 00:19:12.504 "io_unit_size": 131072, 00:19:12.504 "max_aq_depth": 128, 00:19:12.504 "num_shared_buffers": 511, 00:19:12.504 "buf_cache_size": 4294967295, 00:19:12.504 "dif_insert_or_strip": false, 00:19:12.504 "zcopy": false, 00:19:12.504 "c2h_success": false, 00:19:12.504 "sock_priority": 0, 00:19:12.504 "abort_timeout_sec": 1, 00:19:12.504 "ack_timeout": 0, 00:19:12.504 "data_wr_pool_size": 0 00:19:12.504 } 00:19:12.504 }, 00:19:12.504 { 00:19:12.504 "method": "nvmf_create_subsystem", 00:19:12.504 "params": { 00:19:12.504 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.504 "allow_any_host": false, 00:19:12.504 "serial_number": "00000000000000000000", 00:19:12.504 "model_number": "SPDK bdev Controller", 00:19:12.504 "max_namespaces": 32, 00:19:12.504 "min_cntlid": 1, 00:19:12.504 "max_cntlid": 65519, 00:19:12.504 "ana_reporting": false 00:19:12.504 } 00:19:12.504 }, 00:19:12.504 { 00:19:12.504 "method": "nvmf_subsystem_add_host", 00:19:12.504 "params": { 00:19:12.504 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.504 "host": "nqn.2016-06.io.spdk:host1", 00:19:12.504 "psk": "key0" 00:19:12.504 } 00:19:12.504 }, 00:19:12.504 { 00:19:12.504 "method": "nvmf_subsystem_add_ns", 00:19:12.504 "params": { 00:19:12.504 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.504 "namespace": { 00:19:12.504 "nsid": 1, 00:19:12.504 "bdev_name": "malloc0", 00:19:12.504 "nguid": "E7FA94AC731245D580CA0A2B20031FF8", 00:19:12.504 "uuid": "e7fa94ac-7312-45d5-80ca-0a2b20031ff8", 00:19:12.504 "no_auto_visible": false 00:19:12.504 } 00:19:12.504 } 00:19:12.504 }, 00:19:12.504 { 00:19:12.504 "method": "nvmf_subsystem_add_listener", 00:19:12.504 "params": { 00:19:12.504 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.504 "listen_address": { 00:19:12.504 "trtype": "TCP", 00:19:12.504 "adrfam": "IPv4", 00:19:12.504 "traddr": "10.0.0.3", 00:19:12.504 "trsvcid": "4420" 00:19:12.504 }, 00:19:12.504 "secure_channel": false, 00:19:12.504 "sock_impl": "ssl" 00:19:12.504 } 00:19:12.504 } 00:19:12.504 ] 00:19:12.504 } 00:19:12.504 ] 00:19:12.504 }' 00:19:12.504 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:12.504 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:12.504 14:45:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:12.504 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.504 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71034 00:19:12.504 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:12.504 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71034 00:19:12.504 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71034 ']' 00:19:12.504 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.504 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:12.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.505 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.505 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:12.505 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.505 [2024-11-04 14:45:21.527381] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:19:12.505 [2024-11-04 14:45:21.527444] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.763 [2024-11-04 14:45:21.662507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.763 [2024-11-04 14:45:21.694512] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:12.763 [2024-11-04 14:45:21.694684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:12.763 [2024-11-04 14:45:21.694722] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:12.763 [2024-11-04 14:45:21.694752] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:12.763 [2024-11-04 14:45:21.694783] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
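[editor's note] The final phase of the test (target/tls.sh@267 onward) replays the captured configuration instead of re-issuing individual RPCs: the JSON dumped by save_config on each side is fed back to a fresh nvmf_tgt and bdevperf through /dev/fd/62 and /dev/fd/63. Schematically, and only as a sketch of what the traced `-c /dev/fd/6x` invocations are doing (the netns name and binary paths match this run; the process-substitution form is an assumed equivalent of the file descriptors seen in the log):

    # Capture the running configuration of the target and of bdevperf
    tgtcfg=$(scripts/rpc.py save_config)
    bperfcfg=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)

    # Restart both applications, replaying the saved JSON at start-up
    ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &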
00:19:12.763 [2024-11-04 14:45:21.695063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.763 [2024-11-04 14:45:21.838893] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:12.763 [2024-11-04 14:45:21.899476] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:13.021 [2024-11-04 14:45:21.931422] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:13.021 [2024-11-04 14:45:21.931681] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:13.279 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:13.279 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:13.279 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:13.279 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:13.279 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.537 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:13.537 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=71066 00:19:13.537 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 71066 /var/tmp/bdevperf.sock 00:19:13.537 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71066 ']' 00:19:13.537 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:13.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:13.537 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:13.537 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:13.537 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:13.537 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.537 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:13.537 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:13.537 "subsystems": [ 00:19:13.537 { 00:19:13.537 "subsystem": "keyring", 00:19:13.537 "config": [ 00:19:13.537 { 00:19:13.537 "method": "keyring_file_add_key", 00:19:13.537 "params": { 00:19:13.537 "name": "key0", 00:19:13.537 "path": "/tmp/tmp.H8YwNrhVaH" 00:19:13.537 } 00:19:13.537 } 00:19:13.537 ] 00:19:13.537 }, 00:19:13.537 { 00:19:13.537 "subsystem": "iobuf", 00:19:13.537 "config": [ 00:19:13.537 { 00:19:13.537 "method": "iobuf_set_options", 00:19:13.537 "params": { 00:19:13.537 "small_pool_count": 8192, 00:19:13.537 "large_pool_count": 1024, 00:19:13.537 "small_bufsize": 8192, 00:19:13.537 "large_bufsize": 135168, 00:19:13.537 "enable_numa": false 00:19:13.537 } 00:19:13.537 } 00:19:13.537 ] 00:19:13.537 }, 00:19:13.537 { 00:19:13.537 "subsystem": "sock", 00:19:13.537 "config": [ 00:19:13.537 { 00:19:13.537 "method": "sock_set_default_impl", 00:19:13.537 "params": { 00:19:13.537 "impl_name": "uring" 00:19:13.537 } 00:19:13.537 }, 00:19:13.537 { 00:19:13.537 "method": "sock_impl_set_options", 00:19:13.537 "params": { 00:19:13.537 "impl_name": "ssl", 00:19:13.537 "recv_buf_size": 4096, 00:19:13.537 "send_buf_size": 4096, 00:19:13.537 "enable_recv_pipe": true, 00:19:13.537 "enable_quickack": false, 00:19:13.537 "enable_placement_id": 0, 00:19:13.537 "enable_zerocopy_send_server": true, 00:19:13.538 "enable_zerocopy_send_client": false, 00:19:13.538 "zerocopy_threshold": 0, 00:19:13.538 "tls_version": 0, 00:19:13.538 "enable_ktls": false 00:19:13.538 } 00:19:13.538 }, 00:19:13.538 { 00:19:13.538 "method": "sock_impl_set_options", 00:19:13.538 "params": { 00:19:13.538 "impl_name": "posix", 00:19:13.538 "recv_buf_size": 2097152, 00:19:13.538 "send_buf_size": 2097152, 00:19:13.538 "enable_recv_pipe": true, 00:19:13.538 "enable_quickack": false, 00:19:13.538 "enable_placement_id": 0, 00:19:13.538 "enable_zerocopy_send_server": true, 00:19:13.538 "enable_zerocopy_send_client": false, 00:19:13.538 "zerocopy_threshold": 0, 00:19:13.538 "tls_version": 0, 00:19:13.538 "enable_ktls": false 00:19:13.538 } 00:19:13.538 }, 00:19:13.538 { 00:19:13.538 "method": "sock_impl_set_options", 00:19:13.538 "params": { 00:19:13.538 "impl_name": "uring", 00:19:13.538 "recv_buf_size": 2097152, 00:19:13.538 "send_buf_size": 2097152, 00:19:13.538 "enable_recv_pipe": true, 00:19:13.538 "enable_quickack": false, 00:19:13.538 "enable_placement_id": 0, 00:19:13.538 "enable_zerocopy_send_server": false, 00:19:13.538 "enable_zerocopy_send_client": false, 00:19:13.538 "zerocopy_threshold": 0, 00:19:13.538 "tls_version": 0, 00:19:13.538 "enable_ktls": false 00:19:13.538 } 00:19:13.538 } 00:19:13.538 ] 00:19:13.538 }, 00:19:13.538 { 00:19:13.538 "subsystem": "vmd", 00:19:13.538 "config": [] 00:19:13.538 }, 00:19:13.538 { 00:19:13.538 "subsystem": "accel", 00:19:13.538 "config": [ 00:19:13.538 { 00:19:13.538 "method": "accel_set_options", 00:19:13.538 "params": { 00:19:13.538 "small_cache_size": 128, 00:19:13.538 "large_cache_size": 16, 00:19:13.538 "task_count": 2048, 00:19:13.538 "sequence_count": 2048, 
00:19:13.538 "buf_count": 2048 00:19:13.538 } 00:19:13.538 } 00:19:13.538 ] 00:19:13.538 }, 00:19:13.538 { 00:19:13.538 "subsystem": "bdev", 00:19:13.538 "config": [ 00:19:13.538 { 00:19:13.538 "method": "bdev_set_options", 00:19:13.538 "params": { 00:19:13.538 "bdev_io_pool_size": 65535, 00:19:13.538 "bdev_io_cache_size": 256, 00:19:13.538 "bdev_auto_examine": true, 00:19:13.538 "iobuf_small_cache_size": 128, 00:19:13.538 "iobuf_large_cache_size": 16 00:19:13.538 } 00:19:13.538 }, 00:19:13.538 { 00:19:13.538 "method": "bdev_raid_set_options", 00:19:13.538 "params": { 00:19:13.538 "process_window_size_kb": 1024, 00:19:13.538 "process_max_bandwidth_mb_sec": 0 00:19:13.538 } 00:19:13.538 }, 00:19:13.538 { 00:19:13.538 "method": "bdev_iscsi_set_options", 00:19:13.538 "params": { 00:19:13.538 "timeout_sec": 30 00:19:13.538 } 00:19:13.538 }, 00:19:13.538 { 00:19:13.538 "method": "bdev_nvme_set_options", 00:19:13.538 "params": { 00:19:13.538 "action_on_timeout": "none", 00:19:13.538 "timeout_us": 0, 00:19:13.538 "timeout_admin_us": 0, 00:19:13.538 "keep_alive_timeout_ms": 10000, 00:19:13.538 "arbitration_burst": 0, 00:19:13.538 "low_priority_weight": 0, 00:19:13.538 "medium_priority_weight": 0, 00:19:13.538 "high_priority_weight": 0, 00:19:13.538 "nvme_adminq_poll_period_us": 10000, 00:19:13.538 "nvme_ioq_poll_period_us": 0, 00:19:13.538 "io_queue_requests": 512, 00:19:13.538 "delay_cmd_submit": true, 00:19:13.538 "transport_retry_count": 4, 00:19:13.538 "bdev_retry_count": 3, 00:19:13.538 "transport_ack_timeout": 0, 00:19:13.538 "ctrlr_loss_timeout_sec": 0, 00:19:13.538 "reconnect_delay_sec": 0, 00:19:13.538 "fast_io_fail_timeout_sec": 0, 00:19:13.538 "disable_auto_failback": false, 00:19:13.538 "generate_uuids": false, 00:19:13.538 "transport_tos": 0, 00:19:13.538 "nvme_error_stat": false, 00:19:13.538 "rdma_srq_size": 0, 00:19:13.538 "io_path_stat": false, 00:19:13.538 "allow_accel_sequence": false, 00:19:13.538 "rdma_max_cq_size": 0, 00:19:13.538 "rdma_cm_event_timeout_ms": 0, 00:19:13.538 "dhchap_digests": [ 00:19:13.538 "sha256", 00:19:13.538 "sha384", 00:19:13.538 "sha512" 00:19:13.538 ], 00:19:13.538 "dhchap_dhgroups": [ 00:19:13.538 "null", 00:19:13.538 "ffdhe2048", 00:19:13.538 "ffdhe3072", 00:19:13.538 "ffdhe4096", 00:19:13.538 "ffdhe6144", 00:19:13.538 "ffdhe8192" 00:19:13.538 ] 00:19:13.538 } 00:19:13.538 }, 00:19:13.538 { 00:19:13.538 "method": "bdev_nvme_attach_controller", 00:19:13.538 "params": { 00:19:13.538 "name": "nvme0", 00:19:13.538 "trtype": "TCP", 00:19:13.538 "adrfam": "IPv4", 00:19:13.538 "traddr": "10.0.0.3", 00:19:13.538 "trsvcid": "4420", 00:19:13.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.538 "prchk_reftag": false, 00:19:13.538 "prchk_guard": false, 00:19:13.538 "ctrlr_loss_timeout_sec": 0, 00:19:13.538 "reconnect_delay_sec": 0, 00:19:13.538 "fast_io_fail_timeout_sec": 0, 00:19:13.538 "psk": "key0", 00:19:13.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:13.538 "hdgst": false, 00:19:13.538 "ddgst": false, 00:19:13.538 "multipath": "multipath" 00:19:13.538 } 00:19:13.538 }, 00:19:13.538 { 00:19:13.538 "method": "bdev_nvme_set_hotplug", 00:19:13.538 "params": { 00:19:13.538 "period_us": 100000, 00:19:13.538 "enable": false 00:19:13.538 } 00:19:13.538 }, 00:19:13.538 { 00:19:13.538 "method": "bdev_enable_histogram", 00:19:13.538 "params": { 00:19:13.538 "name": "nvme0n1", 00:19:13.538 "enable": true 00:19:13.538 } 00:19:13.538 }, 00:19:13.538 { 00:19:13.538 "method": "bdev_wait_for_examine" 00:19:13.538 } 00:19:13.538 ] 00:19:13.538 }, 00:19:13.538 { 
00:19:13.538 "subsystem": "nbd", 00:19:13.538 "config": [] 00:19:13.538 } 00:19:13.538 ] 00:19:13.538 }' 00:19:13.538 [2024-11-04 14:45:22.480748] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:19:13.538 [2024-11-04 14:45:22.481163] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71066 ] 00:19:13.538 [2024-11-04 14:45:22.622443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.538 [2024-11-04 14:45:22.660062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.796 [2024-11-04 14:45:22.773122] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:13.796 [2024-11-04 14:45:22.811056] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:14.361 14:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:14.361 14:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:14.361 14:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:14.361 14:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:19:14.618 14:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.618 14:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:14.618 Running I/O for 1 seconds... 
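The JSON blob printed above is never written to disk; tls.sh hands it to bdevperf as an anonymous file, which is why the command line shows -c /dev/fd/63. A minimal sketch of the same pattern, with $config standing in for the JSON printed above:

# process-substitution form of the bdevperf invocation traced above
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 \
    -c <(echo "$config")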
00:19:15.551 5530.00 IOPS, 21.60 MiB/s 00:19:15.551 Latency(us) 00:19:15.551 [2024-11-04T14:45:24.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.551 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:15.551 Verification LBA range: start 0x0 length 0x2000 00:19:15.551 nvme0n1 : 1.01 5588.99 21.83 0.00 0.00 22739.12 4007.78 17140.18 00:19:15.551 [2024-11-04T14:45:24.691Z] =================================================================================================================== 00:19:15.551 [2024-11-04T14:45:24.691Z] Total : 5588.99 21.83 0.00 0.00 22739.12 4007.78 17140.18 00:19:15.551 { 00:19:15.551 "results": [ 00:19:15.551 { 00:19:15.551 "job": "nvme0n1", 00:19:15.551 "core_mask": "0x2", 00:19:15.551 "workload": "verify", 00:19:15.551 "status": "finished", 00:19:15.551 "verify_range": { 00:19:15.551 "start": 0, 00:19:15.551 "length": 8192 00:19:15.551 }, 00:19:15.551 "queue_depth": 128, 00:19:15.551 "io_size": 4096, 00:19:15.551 "runtime": 1.012348, 00:19:15.551 "iops": 5588.9871862245, 00:19:15.551 "mibps": 21.831981196189453, 00:19:15.551 "io_failed": 0, 00:19:15.551 "io_timeout": 0, 00:19:15.551 "avg_latency_us": 22739.11601707589, 00:19:15.551 "min_latency_us": 4007.7784615384617, 00:19:15.551 "max_latency_us": 17140.184615384616 00:19:15.551 } 00:19:15.551 ], 00:19:15.551 "core_count": 1 00:19:15.551 } 00:19:15.551 14:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:15.551 14:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:15.551 14:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:15.551 14:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:19:15.551 14:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:19:15.551 14:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:19:15.551 14:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:15.551 14:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:19:15.551 14:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:19:15.551 14:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:19:15.551 14:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:15.551 nvmf_trace.0 00:19:15.809 14:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:19:15.809 14:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 71066 00:19:15.809 14:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71066 ']' 00:19:15.809 14:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71066 00:19:15.809 14:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:15.809 14:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:15.809 14:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71066 00:19:15.809 14:45:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:15.810 14:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:15.810 14:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71066' 00:19:15.810 killing process with pid 71066 00:19:15.810 Received shutdown signal, test time was about 1.000000 seconds 00:19:15.810 00:19:15.810 Latency(us) 00:19:15.810 [2024-11-04T14:45:24.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.810 [2024-11-04T14:45:24.950Z] =================================================================================================================== 00:19:15.810 [2024-11-04T14:45:24.950Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:15.810 14:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71066 00:19:15.810 14:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71066 00:19:15.810 14:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:15.810 14:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:15.810 14:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:16.374 rmmod nvme_tcp 00:19:16.374 rmmod nvme_fabrics 00:19:16.374 rmmod nvme_keyring 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 71034 ']' 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 71034 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71034 ']' 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71034 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71034 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:16.374 killing process with pid 71034 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71034' 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71034 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # 
wait 71034 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:16.374 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:16.631 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:16.631 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:16.631 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:16.631 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:16.631 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:16.631 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:16.631 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:16.631 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:16.631 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:16.631 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:16.631 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.631 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:16.631 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.631 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:19:16.631 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.q24wiLko5H /tmp/tmp.RBeItjjU1m /tmp/tmp.H8YwNrhVaH 00:19:16.631 00:19:16.631 real 1m19.041s 00:19:16.631 user 2m9.666s 00:19:16.631 sys 0m21.626s 00:19:16.631 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:16.631 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 
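One practical note before the suite moves on: the per-run JSON block emitted by bdevperf.py further above is machine readable, so the headline numbers can be pulled out directly. A small sketch, with the field names taken from that output and results.json standing in for wherever the block was saved:

# extract job name, IOPS and average latency from the bdevperf results JSON
jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.avg_latency_us) us avg"' results.json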
00:19:16.631 ************************************ 00:19:16.631 END TEST nvmf_tls 00:19:16.631 ************************************ 00:19:16.631 14:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:16.631 14:45:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:16.632 14:45:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:16.632 14:45:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:16.632 ************************************ 00:19:16.632 START TEST nvmf_fips 00:19:16.632 ************************************ 00:19:16.632 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:16.632 * Looking for test storage... 00:19:16.632 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:19:16.632 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:16.632 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:19:16.632 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:16.890 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:16.890 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:16.890 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:16.890 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:16.890 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:16.890 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:16.890 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:16.890 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:16.890 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:16.890 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:16.890 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:16.890 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:16.890 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:16.890 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:16.890 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:16.890 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:16.890 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:16.890 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:16.890 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:16.890 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:16.890 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:16.890 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:16.890 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:16.890 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:16.890 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:16.890 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:16.890 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:16.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.891 --rc genhtml_branch_coverage=1 00:19:16.891 --rc genhtml_function_coverage=1 00:19:16.891 --rc genhtml_legend=1 00:19:16.891 --rc geninfo_all_blocks=1 00:19:16.891 --rc geninfo_unexecuted_blocks=1 00:19:16.891 00:19:16.891 ' 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:16.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.891 --rc genhtml_branch_coverage=1 00:19:16.891 --rc genhtml_function_coverage=1 00:19:16.891 --rc genhtml_legend=1 00:19:16.891 --rc geninfo_all_blocks=1 00:19:16.891 --rc geninfo_unexecuted_blocks=1 00:19:16.891 00:19:16.891 ' 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:16.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.891 --rc genhtml_branch_coverage=1 00:19:16.891 --rc genhtml_function_coverage=1 00:19:16.891 --rc genhtml_legend=1 00:19:16.891 --rc geninfo_all_blocks=1 00:19:16.891 --rc geninfo_unexecuted_blocks=1 00:19:16.891 00:19:16.891 ' 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:16.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.891 --rc genhtml_branch_coverage=1 00:19:16.891 --rc genhtml_function_coverage=1 00:19:16.891 --rc genhtml_legend=1 00:19:16.891 --rc geninfo_all_blocks=1 00:19:16.891 --rc geninfo_unexecuted_blocks=1 00:19:16.891 00:19:16.891 ' 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:16.891 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:16.891 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:19:16.892 Error setting digest 00:19:16.892 4092B0858C7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:16.892 4092B0858C7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:16.892 
14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:16.892 Cannot find device "nvmf_init_br" 00:19:16.892 14:45:25 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:16.892 Cannot find device "nvmf_init_br2" 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:19:16.892 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:16.892 Cannot find device "nvmf_tgt_br" 00:19:16.892 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:19:16.892 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:16.892 Cannot find device "nvmf_tgt_br2" 00:19:16.892 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:19:16.892 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:16.892 Cannot find device "nvmf_init_br" 00:19:16.892 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:19:16.892 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:17.186 Cannot find device "nvmf_init_br2" 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:17.186 Cannot find device "nvmf_tgt_br" 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:17.186 Cannot find device "nvmf_tgt_br2" 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:17.186 Cannot find device "nvmf_br" 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:17.186 Cannot find device "nvmf_init_if" 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:17.186 Cannot find device "nvmf_init_if2" 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:17.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:17.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:17.186 14:45:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:17.186 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:17.187 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:17.187 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:17.187 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:17.187 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:17.187 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:17.187 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:19:17.187 00:19:17.187 --- 10.0.0.3 ping statistics --- 00:19:17.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.187 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:19:17.187 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:17.187 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:17.187 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.032 ms 00:19:17.187 00:19:17.187 --- 10.0.0.4 ping statistics --- 00:19:17.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.187 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:19:17.187 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:17.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:17.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:19:17.187 00:19:17.187 --- 10.0.0.1 ping statistics --- 00:19:17.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.187 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:19:17.187 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:17.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:17.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:19:17.187 00:19:17.187 --- 10.0.0.2 ping statistics --- 00:19:17.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.187 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:19:17.187 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:17.187 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:19:17.187 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:17.187 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:17.187 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:17.187 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:17.187 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:17.187 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:17.187 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:17.187 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:17.187 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:17.187 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:17.187 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:17.187 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=71381 00:19:17.187 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 71381 00:19:17.187 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 71381 ']' 00:19:17.187 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.187 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:17.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.187 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:17.187 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.187 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:17.187 14:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:17.445 [2024-11-04 14:45:26.340041] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:19:17.445 [2024-11-04 14:45:26.340136] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.445 [2024-11-04 14:45:26.488627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.445 [2024-11-04 14:45:26.524939] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:17.445 [2024-11-04 14:45:26.524979] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:17.445 [2024-11-04 14:45:26.524985] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:17.445 [2024-11-04 14:45:26.524990] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:17.445 [2024-11-04 14:45:26.524995] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:17.445 [2024-11-04 14:45:26.525276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.445 [2024-11-04 14:45:26.555970] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:18.376 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:18.377 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:19:18.377 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:18.377 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:18.377 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:18.377 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.377 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:18.377 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:18.377 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:18.377 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.lIJ 00:19:18.377 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:18.377 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.lIJ 00:19:18.377 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.lIJ 00:19:18.377 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.lIJ 00:19:18.377 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:18.377 [2024-11-04 14:45:27.424122] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.377 [2024-11-04 14:45:27.440062] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:18.377 [2024-11-04 14:45:27.440219] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:18.377 malloc0 00:19:18.377 14:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:18.377 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=71417 00:19:18.377 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 71417 /var/tmp/bdevperf.sock 00:19:18.377 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 71417 ']' 00:19:18.377 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:18.377 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:18.377 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:18.377 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:18.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:18.377 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:18.377 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:18.635 [2024-11-04 14:45:27.547257] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:19:18.635 [2024-11-04 14:45:27.547325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71417 ] 00:19:18.635 [2024-11-04 14:45:27.686635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.635 [2024-11-04 14:45:27.723363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.635 [2024-11-04 14:45:27.753544] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:19.568 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:19.568 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:19:19.568 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.lIJ 00:19:19.568 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:19.827 [2024-11-04 14:45:28.825037] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:19.827 TLSTESTn1 00:19:19.827 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:20.086 Running I/O for 10 seconds... 
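The trace above wires the FIPS TLS case end to end on the client side: the NVMe TLS PSK is written to a temp file with mode 0600, bdevperf is started with its own RPC socket, the PSK is registered as keyring key "key0", a TLS-enabled NVMe-oF/TCP controller is attached with --psk, and bdevperf.py drives the verify workload. A condensed sketch of that sequence follows; every command is taken from the trace, while SPDK_DIR (shorthand for /home/vagrant/spdk_repo/spdk) and the backgrounding/ordering details are assumptions, not part of the log.

# Sketch of the client-side TLS sequence from fips.sh@137-156 above; assumes the
# target created by setup_nvmf_tgt_conf is already listening on 10.0.0.3:4420.
SPDK_DIR=/home/vagrant/spdk_repo/spdk   # shorthand, not part of the trace
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=$(mktemp -t spdk-psk.XXX)
echo -n "$key" > "$key_path"
chmod 0600 "$key_path"
# bdevperf runs with its own RPC socket (-r); -z keeps it idle until RPC-driven tests start.
"$SPDK_DIR"/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 &
# Once /var/tmp/bdevperf.sock accepts RPCs: register the PSK, then attach a TLS controller.
"$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
"$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
"$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests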
00:19:21.947 5551.00 IOPS, 21.68 MiB/s [2024-11-04T14:45:32.057Z] 5572.00 IOPS, 21.77 MiB/s [2024-11-04T14:45:33.428Z] 5701.00 IOPS, 22.27 MiB/s [2024-11-04T14:45:34.360Z] 5868.00 IOPS, 22.92 MiB/s [2024-11-04T14:45:35.293Z] 6072.40 IOPS, 23.72 MiB/s [2024-11-04T14:45:36.245Z] 6213.33 IOPS, 24.27 MiB/s [2024-11-04T14:45:37.192Z] 6217.86 IOPS, 24.29 MiB/s [2024-11-04T14:45:38.124Z] 6229.50 IOPS, 24.33 MiB/s [2024-11-04T14:45:39.085Z] 6239.22 IOPS, 24.37 MiB/s [2024-11-04T14:45:39.085Z] 6255.80 IOPS, 24.44 MiB/s 00:19:29.945 Latency(us) 00:19:29.945 [2024-11-04T14:45:39.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.945 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:29.945 Verification LBA range: start 0x0 length 0x2000 00:19:29.945 TLSTESTn1 : 10.01 6262.52 24.46 0.00 0.00 20407.27 2646.65 18350.08 00:19:29.945 [2024-11-04T14:45:39.085Z] =================================================================================================================== 00:19:29.945 [2024-11-04T14:45:39.085Z] Total : 6262.52 24.46 0.00 0.00 20407.27 2646.65 18350.08 00:19:29.945 { 00:19:29.945 "results": [ 00:19:29.945 { 00:19:29.945 "job": "TLSTESTn1", 00:19:29.945 "core_mask": "0x4", 00:19:29.945 "workload": "verify", 00:19:29.945 "status": "finished", 00:19:29.945 "verify_range": { 00:19:29.945 "start": 0, 00:19:29.945 "length": 8192 00:19:29.945 }, 00:19:29.945 "queue_depth": 128, 00:19:29.945 "io_size": 4096, 00:19:29.945 "runtime": 10.009224, 00:19:29.945 "iops": 6262.523448371222, 00:19:29.945 "mibps": 24.462982220200086, 00:19:29.945 "io_failed": 0, 00:19:29.945 "io_timeout": 0, 00:19:29.945 "avg_latency_us": 20407.271682986062, 00:19:29.945 "min_latency_us": 2646.646153846154, 00:19:29.945 "max_latency_us": 18350.08 00:19:29.945 } 00:19:29.945 ], 00:19:29.945 "core_count": 1 00:19:29.945 } 00:19:29.945 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:29.945 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:29.945 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:19:29.945 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:19:29.945 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:19:29.945 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:29.945 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:19:29.945 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:19:29.945 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:19:29.945 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:29.945 nvmf_trace.0 00:19:30.203 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:19:30.203 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 71417 00:19:30.203 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 71417 ']' 00:19:30.203 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 71417 
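The per-second progress, the latency summary and the JSON block above are internally consistent: at the 4096-byte I/O size bdevperf was launched with, the reported IOPS and MiB/s agree. A one-line cross-check, with the values copied from the JSON result:

echo 'scale=2; 6262.52 * 4096 / 1048576' | bc   # 24.46 MiB/s, matching the "mibps" field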
00:19:30.203 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:19:30.203 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:30.203 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71417 00:19:30.203 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:30.203 killing process with pid 71417 00:19:30.203 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:30.203 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71417' 00:19:30.203 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 71417 00:19:30.203 Received shutdown signal, test time was about 10.000000 seconds 00:19:30.203 00:19:30.203 Latency(us) 00:19:30.203 [2024-11-04T14:45:39.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.203 [2024-11-04T14:45:39.343Z] =================================================================================================================== 00:19:30.203 [2024-11-04T14:45:39.343Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:30.203 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 71417 00:19:30.203 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:30.203 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:30.203 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:30.203 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:30.203 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:30.203 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:30.203 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:30.203 rmmod nvme_tcp 00:19:30.203 rmmod nvme_fabrics 00:19:30.203 rmmod nvme_keyring 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 71381 ']' 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 71381 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 71381 ']' 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 71381 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71381 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 
= sudo ']' 00:19:30.461 killing process with pid 71381 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71381' 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 71381 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 71381 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:30.461 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:30.719 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:30.719 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:30.719 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:30.719 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:30.719 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:30.719 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.719 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:30.719 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.719 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:19:30.719 14:45:39 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.lIJ 00:19:30.719 00:19:30.719 real 0m14.045s 00:19:30.719 user 0m20.548s 00:19:30.719 sys 0m4.608s 00:19:30.719 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:30.719 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:30.719 ************************************ 00:19:30.719 END TEST nvmf_fips 00:19:30.719 ************************************ 00:19:30.719 14:45:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:30.719 14:45:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:30.719 14:45:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:30.719 14:45:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:30.719 ************************************ 00:19:30.719 START TEST nvmf_control_msg_list 00:19:30.719 ************************************ 00:19:30.719 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:30.719 * Looking for test storage... 00:19:30.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:30.719 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:30.719 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:30.719 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:30.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.978 --rc genhtml_branch_coverage=1 00:19:30.978 --rc genhtml_function_coverage=1 00:19:30.978 --rc genhtml_legend=1 00:19:30.978 --rc geninfo_all_blocks=1 00:19:30.978 --rc geninfo_unexecuted_blocks=1 00:19:30.978 00:19:30.978 ' 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:30.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.978 --rc genhtml_branch_coverage=1 00:19:30.978 --rc genhtml_function_coverage=1 00:19:30.978 --rc genhtml_legend=1 00:19:30.978 --rc geninfo_all_blocks=1 00:19:30.978 --rc geninfo_unexecuted_blocks=1 00:19:30.978 00:19:30.978 ' 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:30.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.978 --rc genhtml_branch_coverage=1 00:19:30.978 --rc genhtml_function_coverage=1 00:19:30.978 --rc genhtml_legend=1 00:19:30.978 --rc geninfo_all_blocks=1 00:19:30.978 --rc geninfo_unexecuted_blocks=1 00:19:30.978 00:19:30.978 ' 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:30.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.978 --rc genhtml_branch_coverage=1 00:19:30.978 --rc genhtml_function_coverage=1 00:19:30.978 --rc genhtml_legend=1 00:19:30.978 --rc geninfo_all_blocks=1 00:19:30.978 --rc 
geninfo_unexecuted_blocks=1 00:19:30.978 00:19:30.978 ' 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.978 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:30.979 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:30.979 Cannot find device "nvmf_init_br" 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:30.979 Cannot find device "nvmf_init_br2" 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:30.979 Cannot find device "nvmf_tgt_br" 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:30.979 Cannot find device "nvmf_tgt_br2" 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:30.979 Cannot find device "nvmf_init_br" 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:19:30.979 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:30.979 Cannot find device "nvmf_init_br2" 00:19:30.979 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:19:30.979 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:30.979 Cannot find device "nvmf_tgt_br" 00:19:30.979 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:19:30.979 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:30.979 Cannot find device "nvmf_tgt_br2" 00:19:30.979 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:19:30.979 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:30.979 Cannot find device "nvmf_br" 00:19:30.979 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:19:30.979 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:30.979 Cannot find 
device "nvmf_init_if" 00:19:30.979 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:19:30.979 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:30.979 Cannot find device "nvmf_init_if2" 00:19:30.979 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:19:30.979 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:30.979 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:30.979 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:19:30.979 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:30.979 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:30.979 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:19:30.979 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:30.979 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:30.979 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:30.979 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:30.979 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:30.979 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:30.979 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:30.979 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:30.979 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:30.979 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:30.979 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:30.979 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:30.979 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:31.238 14:45:40 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:31.238 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:31.238 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:19:31.238 00:19:31.238 --- 10.0.0.3 ping statistics --- 00:19:31.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.238 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:31.238 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:31.238 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:19:31.238 00:19:31.238 --- 10.0.0.4 ping statistics --- 00:19:31.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.238 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:31.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:31.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:19:31.238 00:19:31.238 --- 10.0.0.1 ping statistics --- 00:19:31.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.238 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:31.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:31.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:19:31.238 00:19:31.238 --- 10.0.0.2 ping statistics --- 00:19:31.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.238 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=71805 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 71805 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 71805 ']' 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
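At this point nvmfappstart has launched nvmf_tgt inside the test namespace and is waiting for its RPC socket. A minimal sketch of that step, assuming the namespace created by the veth setup above: the nvmf_tgt path and flags are taken from the trace, while the polling loop is only a stand-in for the waitforlisten helper, using rpc_get_methods as an innocuous probe.

SPDK_DIR=/home/vagrant/spdk_repo/spdk   # shorthand, as above
ip netns exec nvmf_tgt_ns_spdk "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
# The RPC socket is a UNIX-domain socket on the shared filesystem, so no netns is needed here.
until "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done

The control_msg_list test then creates the TCP transport with a 768-byte in-capsule data size and --control-msg-num 1, a subsystem with a Malloc namespace, and a listener on 10.0.0.3:4420, as traced below.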
00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:31.238 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:31.238 [2024-11-04 14:45:40.262672] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:19:31.238 [2024-11-04 14:45:40.262731] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.496 [2024-11-04 14:45:40.402838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.496 [2024-11-04 14:45:40.437621] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.496 [2024-11-04 14:45:40.437660] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:31.496 [2024-11-04 14:45:40.437667] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:31.496 [2024-11-04 14:45:40.437672] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:31.496 [2024-11-04 14:45:40.437677] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:31.496 [2024-11-04 14:45:40.437923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.496 [2024-11-04 14:45:40.468846] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:32.063 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:32.063 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:19:32.063 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:32.063 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:32.063 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:32.063 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.063 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:32.063 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:19:32.063 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:32.063 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.063 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:32.063 [2024-11-04 14:45:41.164293] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.063 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.063 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:32.063 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.063 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:32.063 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.063 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:32.063 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.063 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:32.063 Malloc0 00:19:32.063 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.063 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:32.063 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.063 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:32.063 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.063 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:32.063 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.063 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:32.063 [2024-11-04 14:45:41.198997] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:32.063 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.319 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=71837 00:19:32.319 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:32.319 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=71838 00:19:32.319 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:32.319 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=71839 00:19:32.319 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 71837 00:19:32.319 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:32.319 [2024-11-04 14:45:41.357215] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:32.319 [2024-11-04 14:45:41.367321] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:32.319 [2024-11-04 14:45:41.377370] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:33.252 Initializing NVMe Controllers 00:19:33.252 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:19:33.252 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:33.252 Initialization complete. Launching workers. 00:19:33.252 ======================================================== 00:19:33.252 Latency(us) 00:19:33.252 Device Information : IOPS MiB/s Average min max 00:19:33.252 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4446.99 17.37 224.67 90.55 676.44 00:19:33.252 ======================================================== 00:19:33.252 Total : 4446.99 17.37 224.67 90.55 676.44 00:19:33.252 00:19:33.252 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 71838 00:19:33.252 Initializing NVMe Controllers 00:19:33.252 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:19:33.252 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:33.252 Initialization complete. Launching workers. 00:19:33.252 ======================================================== 00:19:33.252 Latency(us) 00:19:33.252 Device Information : IOPS MiB/s Average min max 00:19:33.252 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4399.99 17.19 227.02 148.81 1383.90 00:19:33.252 ======================================================== 00:19:33.252 Total : 4399.99 17.19 227.02 148.81 1383.90 00:19:33.252 00:19:33.510 Initializing NVMe Controllers 00:19:33.510 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:19:33.510 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:33.510 Initialization complete. Launching workers. 
00:19:33.510 ======================================================== 00:19:33.510 Latency(us) 00:19:33.510 Device Information : IOPS MiB/s Average min max 00:19:33.510 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 4470.00 17.46 223.44 88.04 676.94 00:19:33.510 ======================================================== 00:19:33.510 Total : 4470.00 17.46 223.44 88.04 676.94 00:19:33.510 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 71839 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:33.510 rmmod nvme_tcp 00:19:33.510 rmmod nvme_fabrics 00:19:33.510 rmmod nvme_keyring 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 71805 ']' 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 71805 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 71805 ']' 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 71805 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71805 00:19:33.510 killing process with pid 71805 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71805' 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 71805 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 71805 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- 
# '[' '' == iso ']' 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:33.510 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:33.770 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:33.770 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:33.770 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:33.770 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:33.770 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:33.770 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:33.770 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:33.770 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:33.770 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:33.770 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:33.770 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:33.770 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:33.770 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:33.770 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:33.770 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:33.770 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:33.770 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.770 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:33.770 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.770 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:19:33.770 00:19:33.770 real 0m3.086s 00:19:33.770 user 0m5.362s 00:19:33.770 sys 0m1.026s 00:19:33.770 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:19:33.770 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:33.770 ************************************ 00:19:33.770 END TEST nvmf_control_msg_list 00:19:33.770 ************************************ 00:19:33.770 14:45:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:33.770 14:45:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:33.770 14:45:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:33.770 14:45:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:33.770 ************************************ 00:19:33.770 START TEST nvmf_wait_for_buf 00:19:33.770 ************************************ 00:19:33.770 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:34.029 * Looking for test storage... 00:19:34.029 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:34.029 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:34.029 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:19:34.029 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:34.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.029 --rc genhtml_branch_coverage=1 00:19:34.029 --rc genhtml_function_coverage=1 00:19:34.029 --rc genhtml_legend=1 00:19:34.029 --rc geninfo_all_blocks=1 00:19:34.029 --rc geninfo_unexecuted_blocks=1 00:19:34.029 00:19:34.029 ' 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:34.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.029 --rc genhtml_branch_coverage=1 00:19:34.029 --rc genhtml_function_coverage=1 00:19:34.029 --rc genhtml_legend=1 00:19:34.029 --rc geninfo_all_blocks=1 00:19:34.029 --rc geninfo_unexecuted_blocks=1 00:19:34.029 00:19:34.029 ' 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:34.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.029 --rc genhtml_branch_coverage=1 00:19:34.029 --rc genhtml_function_coverage=1 00:19:34.029 --rc genhtml_legend=1 00:19:34.029 --rc geninfo_all_blocks=1 00:19:34.029 --rc geninfo_unexecuted_blocks=1 00:19:34.029 00:19:34.029 ' 00:19:34.029 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:34.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.029 --rc genhtml_branch_coverage=1 00:19:34.029 --rc genhtml_function_coverage=1 00:19:34.029 --rc genhtml_legend=1 00:19:34.029 --rc geninfo_all_blocks=1 00:19:34.029 --rc geninfo_unexecuted_blocks=1 00:19:34.029 00:19:34.030 ' 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:34.030 14:45:43 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:34.030 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:34.030 Cannot find device "nvmf_init_br" 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:34.030 Cannot find device "nvmf_init_br2" 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:34.030 Cannot find device "nvmf_tgt_br" 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:34.030 Cannot find device "nvmf_tgt_br2" 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:34.030 Cannot find device "nvmf_init_br" 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:34.030 Cannot find device "nvmf_init_br2" 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:34.030 Cannot find device "nvmf_tgt_br" 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:34.030 Cannot find device "nvmf_tgt_br2" 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:19:34.030 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:34.030 Cannot find device "nvmf_br" 00:19:34.031 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:19:34.031 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:34.031 Cannot find device "nvmf_init_if" 00:19:34.031 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:19:34.031 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:34.031 Cannot find device "nvmf_init_if2" 00:19:34.031 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:19:34.031 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:34.031 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:34.031 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:19:34.031 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:34.031 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:34.031 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:19:34.031 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:34.289 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:34.289 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:19:34.289 00:19:34.289 --- 10.0.0.3 ping statistics --- 00:19:34.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.289 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:34.289 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:34.289 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:19:34.289 00:19:34.289 --- 10.0.0.4 ping statistics --- 00:19:34.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.289 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:34.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:34.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:19:34.289 00:19:34.289 --- 10.0.0.1 ping statistics --- 00:19:34.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.289 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:34.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:34.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:19:34.289 00:19:34.289 --- 10.0.0.2 ping statistics --- 00:19:34.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.289 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:34.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=72066 00:19:34.289 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 72066 00:19:34.290 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:34.290 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 72066 ']' 00:19:34.290 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.290 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:34.290 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.290 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:34.290 14:45:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:34.290 [2024-11-04 14:45:43.380442] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:19:34.290 [2024-11-04 14:45:43.380503] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.548 [2024-11-04 14:45:43.519271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.548 [2024-11-04 14:45:43.554714] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.548 [2024-11-04 14:45:43.554757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.548 [2024-11-04 14:45:43.554764] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:34.548 [2024-11-04 14:45:43.554768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:34.548 [2024-11-04 14:45:43.554773] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:34.548 [2024-11-04 14:45:43.555031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.114 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:35.114 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:19:35.114 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:35.114 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:35.114 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:35.371 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.372 14:45:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:35.372 [2024-11-04 14:45:44.322394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:35.372 Malloc0 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:35.372 [2024-11-04 14:45:44.363887] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:35.372 [2024-11-04 14:45:44.387942] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.372 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:35.630 [2024-11-04 14:45:44.582696] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:37.000 Initializing NVMe Controllers 00:19:37.000 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:19:37.000 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:37.000 Initialization complete. Launching workers. 00:19:37.000 ======================================================== 00:19:37.000 Latency(us) 00:19:37.000 Device Information : IOPS MiB/s Average min max 00:19:37.000 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 504.00 63.00 7992.09 6906.81 8166.31 00:19:37.000 ======================================================== 00:19:37.000 Total : 504.00 63.00 7992.09 6906.81 8166.31 00:19:37.000 00:19:37.000 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:37.000 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:37.000 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.001 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.001 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.001 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4788 00:19:37.001 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4788 -eq 0 ]] 00:19:37.001 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:37.001 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:37.001 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:37.001 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:37.001 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:37.001 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:37.001 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:37.001 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:37.001 rmmod nvme_tcp 00:19:37.001 rmmod nvme_fabrics 00:19:37.001 rmmod nvme_keyring 00:19:37.001 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:37.001 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:37.001 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:37.001 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 72066 ']' 00:19:37.001 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 72066 00:19:37.001 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 72066 ']' 00:19:37.001 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # 
kill -0 72066 00:19:37.001 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # uname 00:19:37.001 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:37.001 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72066 00:19:37.001 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:37.001 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:37.001 killing process with pid 72066 00:19:37.001 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72066' 00:19:37.001 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 72066 00:19:37.001 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 72066 00:19:37.001 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:37.001 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:37.001 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:37.001 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:37.001 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:37.001 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:37.001 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:37.001 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:37.001 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:37.001 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:37.001 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:37.258 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:37.258 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:37.258 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:37.258 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:37.258 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:37.258 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:37.258 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:37.258 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:37.258 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:37.258 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:37.258 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:37.258 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:37.258 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.258 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:37.258 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.258 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:19:37.258 00:19:37.258 real 0m3.471s 00:19:37.258 user 0m3.062s 00:19:37.258 sys 0m0.596s 00:19:37.258 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:37.258 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.258 ************************************ 00:19:37.258 END TEST nvmf_wait_for_buf 00:19:37.258 ************************************ 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:37.518 ************************************ 00:19:37.518 START TEST nvmf_nsid 00:19:37.518 ************************************ 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:19:37.518 * Looking for test storage... 
00:19:37.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:37.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.518 --rc genhtml_branch_coverage=1 00:19:37.518 --rc genhtml_function_coverage=1 00:19:37.518 --rc genhtml_legend=1 00:19:37.518 --rc geninfo_all_blocks=1 00:19:37.518 --rc geninfo_unexecuted_blocks=1 00:19:37.518 00:19:37.518 ' 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:37.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.518 --rc genhtml_branch_coverage=1 00:19:37.518 --rc genhtml_function_coverage=1 00:19:37.518 --rc genhtml_legend=1 00:19:37.518 --rc geninfo_all_blocks=1 00:19:37.518 --rc geninfo_unexecuted_blocks=1 00:19:37.518 00:19:37.518 ' 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:37.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.518 --rc genhtml_branch_coverage=1 00:19:37.518 --rc genhtml_function_coverage=1 00:19:37.518 --rc genhtml_legend=1 00:19:37.518 --rc geninfo_all_blocks=1 00:19:37.518 --rc geninfo_unexecuted_blocks=1 00:19:37.518 00:19:37.518 ' 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:37.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.518 --rc genhtml_branch_coverage=1 00:19:37.518 --rc genhtml_function_coverage=1 00:19:37.518 --rc genhtml_legend=1 00:19:37.518 --rc geninfo_all_blocks=1 00:19:37.518 --rc geninfo_unexecuted_blocks=1 00:19:37.518 00:19:37.518 ' 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:37.518 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:37.519 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:37.519 Cannot find device "nvmf_init_br" 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:37.519 Cannot find device "nvmf_init_br2" 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:37.519 Cannot find device "nvmf_tgt_br" 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:37.519 Cannot find device "nvmf_tgt_br2" 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:37.519 Cannot find device "nvmf_init_br" 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:37.519 Cannot find device "nvmf_init_br2" 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:37.519 Cannot find device "nvmf_tgt_br" 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:37.519 Cannot find device "nvmf_tgt_br2" 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:37.519 Cannot find device "nvmf_br" 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:37.519 Cannot find device "nvmf_init_if" 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:19:37.519 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:37.777 Cannot find device "nvmf_init_if2" 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:37.777 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:19:37.777 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
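Condensed, the nvmf_veth_init sequence traced above builds a four-veth test topology: the initiator-side interfaces stay in the root namespace, the target-side interfaces are moved into nvmf_tgt_ns_spdk, and every *_br peer is enslaved to one bridge so the two sides can reach each other. A hand-written summary with names and addresses exactly as in the trace (the per-interface link-up commands of the real common.sh are abbreviated):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk      # target ends live in the netns
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if              # initiator addresses (root ns)
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # (every interface is also set up, as in the trace above)
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br                 # join all *_br peers to one bridge
    done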
00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:37.777 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:37.778 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:37.778 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:37.778 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:37.778 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:37.778 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:37.778 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:19:37.778 00:19:37.778 --- 10.0.0.3 ping statistics --- 00:19:37.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.778 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:19:37.778 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:37.778 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:37.778 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:19:37.778 00:19:37.778 --- 10.0.0.4 ping statistics --- 00:19:37.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.778 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:19:37.778 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:37.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:37.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:37.778 00:19:37.778 --- 10.0.0.1 ping statistics --- 00:19:37.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.778 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:37.778 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:37.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:37.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.035 ms 00:19:37.778 00:19:37.778 --- 10.0.0.2 ping statistics --- 00:19:37.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.778 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:19:37.778 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:37.778 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:19:37.778 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:37.778 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:37.778 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:37.778 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:37.778 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:37.778 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:37.778 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:37.778 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:19:37.778 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:37.778 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:37.778 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:37.778 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=72326 00:19:37.778 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 72326 00:19:37.778 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 72326 ']' 00:19:37.778 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.778 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:37.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.778 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.778 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:19:37.778 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:37.778 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:38.044 [2024-11-04 14:45:46.940865] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
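With both initiator and target addresses answering pings, the first target is started inside the namespace. Stripped of the xtrace noise, the launch traced just above reduces to the sketch below (waitforlisten is the autotest helper that polls the RPC socket; taking the PID via $! is a simplification of what nvmfappstart actually does):

    modprobe nvme-tcp
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
    nvmfpid=$!
    waitforlisten "$nvmfpid" /var/tmp/spdk.sock    # block until the app answers on its RPC socket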
00:19:38.044 [2024-11-04 14:45:46.940931] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.044 [2024-11-04 14:45:47.078871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.044 [2024-11-04 14:45:47.113802] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:38.044 [2024-11-04 14:45:47.113843] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.044 [2024-11-04 14:45:47.113850] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:38.044 [2024-11-04 14:45:47.113855] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:38.044 [2024-11-04 14:45:47.113859] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:38.044 [2024-11-04 14:45:47.114118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.044 [2024-11-04 14:45:47.144392] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=72351 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=2c8d4056-e41b-4103-a100-7a718520d84d 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=e9b0c136-0bba-4587-a5cc-3c9e9d08c044 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=cf132a27-4f80-4fea-9323-b233f7524da3 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:38.302 null0 00:19:38.302 null1 00:19:38.302 [2024-11-04 14:45:47.266224] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:19:38.302 [2024-11-04 14:45:47.266284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72351 ] 00:19:38.302 null2 00:19:38.302 [2024-11-04 14:45:47.273972] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:38.302 [2024-11-04 14:45:47.298052] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 72351 /var/tmp/tgt2.sock 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 72351 ']' 00:19:38.302 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:19:38.303 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:38.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:19:38.303 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
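From here the trace shows the second target (started with -r /var/tmp/tgt2.sock) listening on 10.0.0.1:4421 under nqn.2024-10.io.spdk:cnode2, the host connecting to it, and each namespace's NGUID being compared against the UUID it was created with. The namespace-creation RPCs themselves are suppressed by xtrace_disable above, so this hand-written sketch only covers the connect-and-verify half; the loop and variable indirection are illustrative, while the commands, addresses, and ns1uuid/ns2uuid/ns3uuid values come straight from the trace:

    nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    for nsid in 1 2 3; do
        var="ns${nsid}uuid"                                    # UUIDs generated with uuidgen above
        want=$(tr -d - <<< "${!var}" | tr '[:lower:]' '[:upper:]')
        got=$(nvme id-ns "/dev/nvme0n${nsid}" -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
        [[ $got == "$want" ]] || exit 1                        # NGUID must equal the dash-stripped UUID
    done
    nvme disconnect -d /dev/nvme0                              # nvme0 is the controller name seen in the trace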
00:19:38.303 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:38.303 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:38.303 [2024-11-04 14:45:47.406476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.560 [2024-11-04 14:45:47.443843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:38.560 [2024-11-04 14:45:47.490321] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:38.560 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:38.560 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:19:38.560 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:19:39.159 [2024-11-04 14:45:48.019377] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:39.159 [2024-11-04 14:45:48.035434] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:19:39.159 nvme0n1 nvme0n2 00:19:39.159 nvme1n1 00:19:39.159 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:19:39.159 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:19:39.159 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:19:39.159 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:19:39.159 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:19:39.159 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:19:39.159 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:19:39.159 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:19:39.159 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:19:39.159 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:19:39.159 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:19:39.159 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:19:39.159 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:19:39.159 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:19:39.159 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:19:39.159 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:19:40.094 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:19:40.094 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:19:40.094 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:19:40.094 14:45:49 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 2c8d4056-e41b-4103-a100-7a718520d84d 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=2c8d4056e41b4103a1007a718520d84d 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 2C8D4056E41B4103A1007A718520D84D 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 2C8D4056E41B4103A1007A718520D84D == \2\C\8\D\4\0\5\6\E\4\1\B\4\1\0\3\A\1\0\0\7\A\7\1\8\5\2\0\D\8\4\D ]] 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid e9b0c136-0bba-4587-a5cc-3c9e9d08c044 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e9b0c1360bba4587a5cc3c9e9d08c044 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E9B0C1360BBA4587A5CC3C9E9D08C044 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ E9B0C1360BBA4587A5CC3C9E9D08C044 == \E\9\B\0\C\1\3\6\0\B\B\A\4\5\8\7\A\5\C\C\3\C\9\E\9\D\0\8\C\0\4\4 ]] 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:19:40.353 14:45:49 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid cf132a27-4f80-4fea-9323-b233f7524da3 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=cf132a274f804fea9323b233f7524da3 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo CF132A274F804FEA9323B233F7524DA3 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ CF132A274F804FEA9323B233F7524DA3 == \C\F\1\3\2\A\2\7\4\F\8\0\4\F\E\A\9\3\2\3\B\2\3\3\F\7\5\2\4\D\A\3 ]] 00:19:40.353 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:19:40.612 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:19:40.612 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:19:40.612 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 72351 00:19:40.612 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 72351 ']' 00:19:40.612 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 72351 00:19:40.612 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:19:40.612 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:40.612 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72351 00:19:40.612 killing process with pid 72351 00:19:40.612 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:40.612 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:40.612 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72351' 00:19:40.612 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 72351 00:19:40.612 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 72351 00:19:40.870 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:19:40.870 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:40.870 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:19:40.870 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:19:40.870 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:19:40.870 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:40.870 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:40.870 rmmod nvme_tcp 00:19:40.870 rmmod nvme_fabrics 00:19:40.870 rmmod nvme_keyring 00:19:40.870 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:40.870 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:19:40.870 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:19:40.870 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 72326 ']' 00:19:40.870 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 72326 00:19:40.870 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 72326 ']' 00:19:40.870 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 72326 00:19:40.870 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:19:40.870 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:40.870 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72326 00:19:40.870 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:40.870 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:40.870 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72326' 00:19:40.870 killing process with pid 72326 00:19:40.870 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 72326 00:19:40.870 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 72326 00:19:40.870 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:40.870 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:40.870 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:40.870 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:19:41.128 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:19:41.128 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:41.128 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:19:41.128 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:41.128 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:41.128 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:41.128 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:41.128 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:41.128 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:19:41.128 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:41.128 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:41.128 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:41.128 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:41.128 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:41.128 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:41.128 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:41.128 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:41.128 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:41.128 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:41.128 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.128 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:41.128 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.128 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:19:41.128 00:19:41.128 real 0m3.847s 00:19:41.128 user 0m5.775s 00:19:41.128 sys 0m1.194s 00:19:41.128 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:41.128 ************************************ 00:19:41.128 END TEST nvmf_nsid 00:19:41.128 ************************************ 00:19:41.128 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:41.421 14:45:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:19:41.421 00:19:41.421 real 4m21.537s 00:19:41.421 user 8m57.883s 00:19:41.421 sys 0m50.410s 00:19:41.421 14:45:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:41.421 14:45:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:41.421 ************************************ 00:19:41.421 END TEST nvmf_target_extra 00:19:41.421 ************************************ 00:19:41.421 14:45:50 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:19:41.421 14:45:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:41.421 14:45:50 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:41.421 14:45:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:41.421 ************************************ 00:19:41.421 START TEST nvmf_host 00:19:41.421 ************************************ 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:19:41.421 * Looking for test storage... 
00:19:41.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:41.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.421 --rc genhtml_branch_coverage=1 00:19:41.421 --rc genhtml_function_coverage=1 00:19:41.421 --rc genhtml_legend=1 00:19:41.421 --rc geninfo_all_blocks=1 00:19:41.421 --rc geninfo_unexecuted_blocks=1 00:19:41.421 00:19:41.421 ' 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:41.421 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:19:41.421 --rc genhtml_branch_coverage=1 00:19:41.421 --rc genhtml_function_coverage=1 00:19:41.421 --rc genhtml_legend=1 00:19:41.421 --rc geninfo_all_blocks=1 00:19:41.421 --rc geninfo_unexecuted_blocks=1 00:19:41.421 00:19:41.421 ' 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:41.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.421 --rc genhtml_branch_coverage=1 00:19:41.421 --rc genhtml_function_coverage=1 00:19:41.421 --rc genhtml_legend=1 00:19:41.421 --rc geninfo_all_blocks=1 00:19:41.421 --rc geninfo_unexecuted_blocks=1 00:19:41.421 00:19:41.421 ' 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:41.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.421 --rc genhtml_branch_coverage=1 00:19:41.421 --rc genhtml_function_coverage=1 00:19:41.421 --rc genhtml_legend=1 00:19:41.421 --rc geninfo_all_blocks=1 00:19:41.421 --rc geninfo_unexecuted_blocks=1 00:19:41.421 00:19:41.421 ' 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:41.421 14:45:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:41.422 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:41.422 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.422 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.422 14:45:50 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.422 14:45:50 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.422 14:45:50 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.422 14:45:50 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.422 14:45:50 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:19:41.422 14:45:50 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.422 14:45:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:19:41.422 14:45:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:41.422 14:45:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:41.422 14:45:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:41.422 14:45:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:41.422 14:45:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:41.422 14:45:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:41.422 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:41.422 14:45:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:41.422 14:45:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:41.422 14:45:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:41.422 14:45:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:19:41.422 14:45:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:19:41.422 14:45:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:19:41.422 14:45:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:41.422 
14:45:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:41.422 14:45:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:41.422 14:45:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.422 ************************************ 00:19:41.422 START TEST nvmf_identify 00:19:41.422 ************************************ 00:19:41.422 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:41.422 * Looking for test storage... 00:19:41.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:41.422 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:41.422 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:41.422 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:41.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.680 --rc genhtml_branch_coverage=1 00:19:41.680 --rc genhtml_function_coverage=1 00:19:41.680 --rc genhtml_legend=1 00:19:41.680 --rc geninfo_all_blocks=1 00:19:41.680 --rc geninfo_unexecuted_blocks=1 00:19:41.680 00:19:41.680 ' 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:41.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.680 --rc genhtml_branch_coverage=1 00:19:41.680 --rc genhtml_function_coverage=1 00:19:41.680 --rc genhtml_legend=1 00:19:41.680 --rc geninfo_all_blocks=1 00:19:41.680 --rc geninfo_unexecuted_blocks=1 00:19:41.680 00:19:41.680 ' 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:41.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.680 --rc genhtml_branch_coverage=1 00:19:41.680 --rc genhtml_function_coverage=1 00:19:41.680 --rc genhtml_legend=1 00:19:41.680 --rc geninfo_all_blocks=1 00:19:41.680 --rc geninfo_unexecuted_blocks=1 00:19:41.680 00:19:41.680 ' 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:41.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.680 --rc genhtml_branch_coverage=1 00:19:41.680 --rc genhtml_function_coverage=1 00:19:41.680 --rc genhtml_legend=1 00:19:41.680 --rc geninfo_all_blocks=1 00:19:41.680 --rc geninfo_unexecuted_blocks=1 00:19:41.680 00:19:41.680 ' 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:19:41.680 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.681 
14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:41.681 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.681 14:45:50 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:41.681 Cannot find device "nvmf_init_br" 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:41.681 Cannot find device "nvmf_init_br2" 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:41.681 Cannot find device "nvmf_tgt_br" 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:19:41.681 Cannot find device "nvmf_tgt_br2" 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:41.681 Cannot find device "nvmf_init_br" 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:41.681 Cannot find device "nvmf_init_br2" 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:41.681 Cannot find device "nvmf_tgt_br" 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:41.681 Cannot find device "nvmf_tgt_br2" 00:19:41.681 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:19:41.682 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:41.682 Cannot find device "nvmf_br" 00:19:41.682 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:19:41.682 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:41.682 Cannot find device "nvmf_init_if" 00:19:41.682 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:19:41.682 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:41.682 Cannot find device "nvmf_init_if2" 00:19:41.682 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:19:41.682 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:41.682 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:41.682 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:19:41.682 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:41.682 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:41.682 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:19:41.682 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:41.682 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:41.682 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:41.682 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:41.682 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:41.682 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:41.682 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:41.940 
14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:41.940 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:41.940 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:19:41.940 00:19:41.940 --- 10.0.0.3 ping statistics --- 00:19:41.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.940 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:41.940 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:41.940 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:19:41.940 00:19:41.940 --- 10.0.0.4 ping statistics --- 00:19:41.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.940 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:41.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:41.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:19:41.940 00:19:41.940 --- 10.0.0.1 ping statistics --- 00:19:41.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.940 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:41.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:41.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:19:41.940 00:19:41.940 --- 10.0.0.2 ping statistics --- 00:19:41.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.940 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=72697 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 72697 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 72697 ']' 00:19:41.940 
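Editor's note: the ip, iptables and ping entries above are nvmf_veth_init from test/nvmf/common.sh building the point-to-point test topology before the target is launched. Collected into one place, with the interface names and addresses used in this run, the setup is roughly:

# condensed from the nvmf_veth_init trace above
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
    ip link set "$dev" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3        # initiator -> target connectivity check, as in the output above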
14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:41.940 14:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:41.940 [2024-11-04 14:45:50.991098] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:19:41.940 [2024-11-04 14:45:50.991155] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:42.199 [2024-11-04 14:45:51.130524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:42.199 [2024-11-04 14:45:51.166871] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.199 [2024-11-04 14:45:51.167041] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:42.199 [2024-11-04 14:45:51.167098] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:42.199 [2024-11-04 14:45:51.167151] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:42.199 [2024-11-04 14:45:51.167218] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
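Editor's note: the target (nvmfpid=72697 above) is launched inside the namespace and the harness blocks in waitforlisten until the RPC socket answers. A minimal equivalent of those two steps, assuming the same repo layout and the default /var/tmp/spdk.sock socket; the harness itself does this via waitforlisten rather than the polling loop shown here:

# hedged sketch of the launch + wait performed by identify.sh
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
tgt_pid=$!
# poll the RPC socket until the target is ready to accept configuration
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods &>/dev/null; do
    sleep 0.5
done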
00:19:42.199 [2024-11-04 14:45:51.167997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:42.199 [2024-11-04 14:45:51.168219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:42.199 [2024-11-04 14:45:51.168712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:42.199 [2024-11-04 14:45:51.168713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.199 [2024-11-04 14:45:51.198634] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:42.765 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:42.765 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:19:42.765 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:42.765 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.765 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:42.765 [2024-11-04 14:45:51.861614] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:42.765 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.765 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:19:42.765 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:42.765 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:42.765 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:42.765 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.765 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:43.023 Malloc0 00:19:43.023 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.023 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:43.023 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.023 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:43.023 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.023 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:19:43.023 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.023 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:43.023 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.023 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:43.023 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.023 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:43.023 [2024-11-04 14:45:51.953164] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:43.023 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.023 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:19:43.023 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.023 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:43.023 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.023 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:19:43.023 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.023 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:43.023 [ 00:19:43.023 { 00:19:43.023 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:43.023 "subtype": "Discovery", 00:19:43.023 "listen_addresses": [ 00:19:43.023 { 00:19:43.023 "trtype": "TCP", 00:19:43.023 "adrfam": "IPv4", 00:19:43.023 "traddr": "10.0.0.3", 00:19:43.023 "trsvcid": "4420" 00:19:43.023 } 00:19:43.023 ], 00:19:43.023 "allow_any_host": true, 00:19:43.023 "hosts": [] 00:19:43.023 }, 00:19:43.023 { 00:19:43.023 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.023 "subtype": "NVMe", 00:19:43.023 "listen_addresses": [ 00:19:43.023 { 00:19:43.023 "trtype": "TCP", 00:19:43.023 "adrfam": "IPv4", 00:19:43.023 "traddr": "10.0.0.3", 00:19:43.023 "trsvcid": "4420" 00:19:43.023 } 00:19:43.023 ], 00:19:43.023 "allow_any_host": true, 00:19:43.023 "hosts": [], 00:19:43.023 "serial_number": "SPDK00000000000001", 00:19:43.023 "model_number": "SPDK bdev Controller", 00:19:43.023 "max_namespaces": 32, 00:19:43.023 "min_cntlid": 1, 00:19:43.023 "max_cntlid": 65519, 00:19:43.023 "namespaces": [ 00:19:43.023 { 00:19:43.024 "nsid": 1, 00:19:43.024 "bdev_name": "Malloc0", 00:19:43.024 "name": "Malloc0", 00:19:43.024 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:19:43.024 "eui64": "ABCDEF0123456789", 00:19:43.024 "uuid": "84267f48-75fe-4fb7-bac6-b3847b3cad2a" 00:19:43.024 } 00:19:43.024 ] 00:19:43.024 } 00:19:43.024 ] 00:19:43.024 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.024 14:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:19:43.024 [2024-11-04 14:45:51.997513] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
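Editor's note: before spdk_nvme_identify starts, the rpc_cmd calls above (create the TCP transport, the Malloc0 bdev, the cnode1 subsystem, its namespace and the listeners) configure the target; rpc_cmd is the harness wrapper around scripts/rpc.py, so the same configuration can be reproduced directly with the values from this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192      # exactly the options rpc_cmd passed above
$rpc bdev_malloc_create 64 512 -b Malloc0         # MALLOC_BDEV_SIZE=64 MiB, MALLOC_BLOCK_SIZE=512
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# "discovery" in the trace refers to the discovery subsystem NQN
$rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_get_subsystems                          # prints the JSON dump shown above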
00:19:43.024 [2024-11-04 14:45:51.997548] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72731 ] 00:19:43.024 [2024-11-04 14:45:52.150101] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:19:43.024 [2024-11-04 14:45:52.150157] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:43.024 [2024-11-04 14:45:52.150160] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:43.024 [2024-11-04 14:45:52.150171] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:43.024 [2024-11-04 14:45:52.150179] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:43.024 [2024-11-04 14:45:52.150441] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:19:43.024 [2024-11-04 14:45:52.150493] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x236f750 0 00:19:43.290 [2024-11-04 14:45:52.165620] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:43.290 [2024-11-04 14:45:52.165636] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:43.290 [2024-11-04 14:45:52.165640] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:43.290 [2024-11-04 14:45:52.165642] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:43.290 [2024-11-04 14:45:52.165666] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.290 [2024-11-04 14:45:52.165670] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.290 [2024-11-04 14:45:52.165673] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x236f750) 00:19:43.290 [2024-11-04 14:45:52.165685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:43.290 [2024-11-04 14:45:52.165708] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3740, cid 0, qid 0 00:19:43.290 [2024-11-04 14:45:52.173620] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.290 [2024-11-04 14:45:52.173634] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.290 [2024-11-04 14:45:52.173637] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.290 [2024-11-04 14:45:52.173641] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3740) on tqpair=0x236f750 00:19:43.290 [2024-11-04 14:45:52.173650] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:43.290 [2024-11-04 14:45:52.173656] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:19:43.290 [2024-11-04 14:45:52.173660] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:19:43.290 [2024-11-04 14:45:52.173671] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.290 [2024-11-04 14:45:52.173675] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
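Editor's note: the identify tool (pid 72731 in the EAL parameters line above) was invoked with a transport ID string pointing at the discovery subsystem; the DEBUG lines that follow come from -L all. Run by hand against the same target it looks like the first command below; the second command, aimed at the I/O subsystem, is an assumption added for illustration and is not taken verbatim from this trace:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all
# assumed follow-up: identify the cnode1 subsystem created above directly
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'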
00:19:43.290 [2024-11-04 14:45:52.173677] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x236f750) 00:19:43.290 [2024-11-04 14:45:52.173684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.290 [2024-11-04 14:45:52.173701] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3740, cid 0, qid 0 00:19:43.290 [2024-11-04 14:45:52.173737] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.290 [2024-11-04 14:45:52.173741] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.290 [2024-11-04 14:45:52.173743] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.290 [2024-11-04 14:45:52.173746] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3740) on tqpair=0x236f750 00:19:43.290 [2024-11-04 14:45:52.173750] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:19:43.290 [2024-11-04 14:45:52.173755] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:19:43.290 [2024-11-04 14:45:52.173760] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.290 [2024-11-04 14:45:52.173763] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.290 [2024-11-04 14:45:52.173765] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x236f750) 00:19:43.290 [2024-11-04 14:45:52.173770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.290 [2024-11-04 14:45:52.173780] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3740, cid 0, qid 0 00:19:43.290 [2024-11-04 14:45:52.173811] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.290 [2024-11-04 14:45:52.173815] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.290 [2024-11-04 14:45:52.173818] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.290 [2024-11-04 14:45:52.173820] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3740) on tqpair=0x236f750 00:19:43.290 [2024-11-04 14:45:52.173824] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:19:43.290 [2024-11-04 14:45:52.173831] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:19:43.290 [2024-11-04 14:45:52.173836] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.290 [2024-11-04 14:45:52.173838] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.290 [2024-11-04 14:45:52.173841] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x236f750) 00:19:43.290 [2024-11-04 14:45:52.173846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.290 [2024-11-04 14:45:52.173856] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3740, cid 0, qid 0 00:19:43.290 [2024-11-04 14:45:52.173884] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.290 [2024-11-04 14:45:52.173888] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.290 [2024-11-04 14:45:52.173891] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.290 [2024-11-04 14:45:52.173893] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3740) on tqpair=0x236f750 00:19:43.290 [2024-11-04 14:45:52.173897] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:43.290 [2024-11-04 14:45:52.173904] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.290 [2024-11-04 14:45:52.173906] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.290 [2024-11-04 14:45:52.173909] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x236f750) 00:19:43.290 [2024-11-04 14:45:52.173914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.290 [2024-11-04 14:45:52.173924] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3740, cid 0, qid 0 00:19:43.290 [2024-11-04 14:45:52.173951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.290 [2024-11-04 14:45:52.173956] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.290 [2024-11-04 14:45:52.173959] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.290 [2024-11-04 14:45:52.173961] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3740) on tqpair=0x236f750 00:19:43.290 [2024-11-04 14:45:52.173965] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:19:43.291 [2024-11-04 14:45:52.173968] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:19:43.291 [2024-11-04 14:45:52.173973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:43.291 [2024-11-04 14:45:52.174076] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:19:43.291 [2024-11-04 14:45:52.174080] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:43.291 [2024-11-04 14:45:52.174086] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.291 [2024-11-04 14:45:52.174089] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.291 [2024-11-04 14:45:52.174091] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x236f750) 00:19:43.291 [2024-11-04 14:45:52.174096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.291 [2024-11-04 14:45:52.174106] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3740, cid 0, qid 0 00:19:43.291 [2024-11-04 14:45:52.174136] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.291 [2024-11-04 14:45:52.174141] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.291 [2024-11-04 14:45:52.174143] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:19:43.291 [2024-11-04 14:45:52.174146] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3740) on tqpair=0x236f750 00:19:43.291 [2024-11-04 14:45:52.174149] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:43.291 [2024-11-04 14:45:52.174156] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.291 [2024-11-04 14:45:52.174158] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.291 [2024-11-04 14:45:52.174161] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x236f750) 00:19:43.291 [2024-11-04 14:45:52.174166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.291 [2024-11-04 14:45:52.174176] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3740, cid 0, qid 0 00:19:43.291 [2024-11-04 14:45:52.174203] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.291 [2024-11-04 14:45:52.174208] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.291 [2024-11-04 14:45:52.174210] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.291 [2024-11-04 14:45:52.174214] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3740) on tqpair=0x236f750 00:19:43.291 [2024-11-04 14:45:52.174217] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:43.291 [2024-11-04 14:45:52.174220] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:19:43.291 [2024-11-04 14:45:52.174225] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:19:43.291 [2024-11-04 14:45:52.174231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:19:43.291 [2024-11-04 14:45:52.174238] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.291 [2024-11-04 14:45:52.174241] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x236f750) 00:19:43.291 [2024-11-04 14:45:52.174247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.291 [2024-11-04 14:45:52.174256] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3740, cid 0, qid 0 00:19:43.291 [2024-11-04 14:45:52.174306] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:43.291 [2024-11-04 14:45:52.174311] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:43.291 [2024-11-04 14:45:52.174314] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:43.291 [2024-11-04 14:45:52.174316] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x236f750): datao=0, datal=4096, cccid=0 00:19:43.291 [2024-11-04 14:45:52.174319] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23d3740) on tqpair(0x236f750): expected_datao=0, payload_size=4096 00:19:43.291 [2024-11-04 14:45:52.174322] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:19:43.291 [2024-11-04 14:45:52.174329] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:43.291 [2024-11-04 14:45:52.174332] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:43.291 [2024-11-04 14:45:52.174338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.291 [2024-11-04 14:45:52.174343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.291 [2024-11-04 14:45:52.174345] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.291 [2024-11-04 14:45:52.174347] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3740) on tqpair=0x236f750 00:19:43.291 [2024-11-04 14:45:52.174354] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:19:43.291 [2024-11-04 14:45:52.174357] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:19:43.291 [2024-11-04 14:45:52.174360] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:19:43.291 [2024-11-04 14:45:52.174364] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:19:43.291 [2024-11-04 14:45:52.174367] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:19:43.291 [2024-11-04 14:45:52.174370] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:19:43.291 [2024-11-04 14:45:52.174378] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:19:43.291 [2024-11-04 14:45:52.174383] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.291 [2024-11-04 14:45:52.174385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.291 [2024-11-04 14:45:52.174388] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x236f750) 00:19:43.291 [2024-11-04 14:45:52.174393] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:43.291 [2024-11-04 14:45:52.174404] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3740, cid 0, qid 0 00:19:43.291 [2024-11-04 14:45:52.174439] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.291 [2024-11-04 14:45:52.174444] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.291 [2024-11-04 14:45:52.174446] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.291 [2024-11-04 14:45:52.174449] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3740) on tqpair=0x236f750 00:19:43.291 [2024-11-04 14:45:52.174455] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.291 [2024-11-04 14:45:52.174457] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.291 [2024-11-04 14:45:52.174459] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x236f750) 00:19:43.291 [2024-11-04 14:45:52.174464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:43.291 
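Editor's note: the DEBUG/NOTICE trace above is the host-side fabrics bring-up performed by the SPDK initiator (icreq, FABRIC CONNECT, CC.EN = 1, wait for CSTS.RDY = 1, IDENTIFY, AER configuration). For comparison only, a kernel host using the nvme-cli variables defined earlier in this log (NVME_CONNECT, NVME_HOSTNQN, NVME_HOSTID) would go through the same sequence with commands along these lines; this is a hedged analogy, not something this test executes at this point:

modprobe nvme-tcp
nvme connect -t tcp -a 10.0.0.3 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa \
    --hostid=0c7d476c-d4d7-4594-a48a-578d93697ffa
nvme list                                   # the Malloc0 namespace should appear as /dev/nvmeXnY
nvme disconnect -n nqn.2016-06.io.spdk:cnode1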
[2024-11-04 14:45:52.174469] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.291 [2024-11-04 14:45:52.174471] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.291 [2024-11-04 14:45:52.174474] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x236f750) 00:19:43.291 [2024-11-04 14:45:52.174478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:43.291 [2024-11-04 14:45:52.174483] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.291 [2024-11-04 14:45:52.174485] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.291 [2024-11-04 14:45:52.174488] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x236f750) 00:19:43.291 [2024-11-04 14:45:52.174492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:43.291 [2024-11-04 14:45:52.174496] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.291 [2024-11-04 14:45:52.174499] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.291 [2024-11-04 14:45:52.174501] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.291 [2024-11-04 14:45:52.174506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:43.291 [2024-11-04 14:45:52.174510] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:43.291 [2024-11-04 14:45:52.174517] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:43.291 [2024-11-04 14:45:52.174522] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.291 [2024-11-04 14:45:52.174524] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x236f750) 00:19:43.291 [2024-11-04 14:45:52.174530] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.291 [2024-11-04 14:45:52.174541] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3740, cid 0, qid 0 00:19:43.291 [2024-11-04 14:45:52.174545] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d38c0, cid 1, qid 0 00:19:43.291 [2024-11-04 14:45:52.174549] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3a40, cid 2, qid 0 00:19:43.291 [2024-11-04 14:45:52.174552] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.291 [2024-11-04 14:45:52.174556] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3d40, cid 4, qid 0 00:19:43.291 [2024-11-04 14:45:52.174622] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.291 [2024-11-04 14:45:52.174627] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.291 [2024-11-04 14:45:52.174629] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.291 [2024-11-04 14:45:52.174632] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3d40) on tqpair=0x236f750 00:19:43.291 [2024-11-04 
14:45:52.174636] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:19:43.291 [2024-11-04 14:45:52.174639] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:19:43.291 [2024-11-04 14:45:52.174647] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.291 [2024-11-04 14:45:52.174650] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x236f750) 00:19:43.291 [2024-11-04 14:45:52.174655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.291 [2024-11-04 14:45:52.174666] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3d40, cid 4, qid 0 00:19:43.291 [2024-11-04 14:45:52.174699] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:43.291 [2024-11-04 14:45:52.174704] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:43.291 [2024-11-04 14:45:52.174706] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:43.292 [2024-11-04 14:45:52.174709] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x236f750): datao=0, datal=4096, cccid=4 00:19:43.292 [2024-11-04 14:45:52.174712] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23d3d40) on tqpair(0x236f750): expected_datao=0, payload_size=4096 00:19:43.292 [2024-11-04 14:45:52.174715] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.292 [2024-11-04 14:45:52.174720] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:43.292 [2024-11-04 14:45:52.174722] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:43.292 [2024-11-04 14:45:52.174729] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.292 [2024-11-04 14:45:52.174733] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.292 [2024-11-04 14:45:52.174735] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.292 [2024-11-04 14:45:52.174738] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3d40) on tqpair=0x236f750 00:19:43.292 [2024-11-04 14:45:52.174747] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:19:43.292 [2024-11-04 14:45:52.174767] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.292 [2024-11-04 14:45:52.174770] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x236f750) 00:19:43.292 [2024-11-04 14:45:52.174775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.292 [2024-11-04 14:45:52.174780] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.292 [2024-11-04 14:45:52.174783] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.292 [2024-11-04 14:45:52.174785] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x236f750) 00:19:43.292 [2024-11-04 14:45:52.174790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:43.292 [2024-11-04 14:45:52.174805] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3d40, cid 4, qid 0 00:19:43.292 [2024-11-04 14:45:52.174810] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3ec0, cid 5, qid 0 00:19:43.292 [2024-11-04 14:45:52.174873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:43.292 [2024-11-04 14:45:52.174878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:43.292 [2024-11-04 14:45:52.174880] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:43.292 [2024-11-04 14:45:52.174882] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x236f750): datao=0, datal=1024, cccid=4 00:19:43.292 [2024-11-04 14:45:52.174885] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23d3d40) on tqpair(0x236f750): expected_datao=0, payload_size=1024 00:19:43.292 [2024-11-04 14:45:52.174888] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.292 [2024-11-04 14:45:52.174893] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:43.292 [2024-11-04 14:45:52.174896] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:43.292 [2024-11-04 14:45:52.174900] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.292 [2024-11-04 14:45:52.174905] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.292 [2024-11-04 14:45:52.174907] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.292 [2024-11-04 14:45:52.174910] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3ec0) on tqpair=0x236f750 00:19:43.292 [2024-11-04 14:45:52.174921] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.292 [2024-11-04 14:45:52.174926] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.292 [2024-11-04 14:45:52.174928] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.292 [2024-11-04 14:45:52.174931] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3d40) on tqpair=0x236f750 00:19:43.292 [2024-11-04 14:45:52.174938] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.292 [2024-11-04 14:45:52.174941] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x236f750) 00:19:43.292 [2024-11-04 14:45:52.174946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.292 [2024-11-04 14:45:52.174958] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3d40, cid 4, qid 0 00:19:43.292 [2024-11-04 14:45:52.174995] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:43.292 [2024-11-04 14:45:52.175000] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:43.292 [2024-11-04 14:45:52.175002] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:43.292 [2024-11-04 14:45:52.175004] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x236f750): datao=0, datal=3072, cccid=4 00:19:43.292 [2024-11-04 14:45:52.175007] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23d3d40) on tqpair(0x236f750): expected_datao=0, payload_size=3072 00:19:43.292 [2024-11-04 14:45:52.175010] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.292 [2024-11-04 14:45:52.175015] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
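The GET LOG PAGE (02) commands traced in this stretch read the discovery log page in pieces: in CDW10, bits 07:00 carry the log page identifier (0x70, the discovery log) and bits 31:16 carry NUMDL, the 0's-based number of dwords to transfer (the upper half, NUMDU, sits in CDW11, which is zero here). A quick decode of the three CDW10 values seen in this trace, sketched in shell purely as an illustration of the NVMe field layout (nothing SPDK-specific is assumed):

  # Decode the Get Log Page CDW10 values that appear in the trace around this point.
  for cdw10 in 0x00ff0070 0x02ff0070 0x00010070; do
      lid=$(( cdw10 & 0xff ))              # bits 07:00 - log page ID (0x70 = discovery log)
      numdl=$(( (cdw10 >> 16) & 0xffff ))  # bits 31:16 - 0's-based dword count (lower half)
      printf 'cdw10=%s -> LID=0x%02x, transfer=%d bytes\n' "$cdw10" "$lid" $(( (numdl + 1) * 4 ))
  done
  # -> transfers of 1024, 3072 and 8 bytes, matching the c2h_data payload_size values
  #    logged for cccid=4 in this section.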
00:19:43.292 [2024-11-04 14:45:52.175017] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:43.292 [2024-11-04 14:45:52.175023] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.292 [2024-11-04 14:45:52.175028] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.292 [2024-11-04 14:45:52.175030] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.292 [2024-11-04 14:45:52.175032] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3d40) on tqpair=0x236f750 00:19:43.292 [2024-11-04 14:45:52.175039] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.292 [2024-11-04 14:45:52.175041] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x236f750) 00:19:43.292 [2024-11-04 14:45:52.175046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.292 [2024-11-04 14:45:52.175059] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3d40, cid 4, qid 0 00:19:43.292 [2024-11-04 14:45:52.175098] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:43.292 [2024-11-04 14:45:52.175104] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:43.292 [2024-11-04 14:45:52.175106] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:43.292 [2024-11-04 14:45:52.175108] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x236f750): datao=0, datal=8, cccid=4 00:19:43.292 [2024-11-04 14:45:52.175111] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23d3d40) on tqpair(0x236f750): expected_datao=0, payload_size=8 00:19:43.292 [2024-11-04 14:45:52.175114] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.292 [2024-11-04 14:45:52.175119] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:43.292 [2024-11-04 14:45:52.175122] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:43.292 [2024-11-04 14:45:52.175131] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.292 [2024-11-04 14:45:52.175136] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.292 [2024-11-04 14:45:52.175138] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.292 [2024-11-04 14:45:52.175141] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3d40) on tqpair=0x236f750 00:19:43.292 ===================================================== 00:19:43.292 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:43.292 ===================================================== 00:19:43.292 Controller Capabilities/Features 00:19:43.292 ================================ 00:19:43.292 Vendor ID: 0000 00:19:43.292 Subsystem Vendor ID: 0000 00:19:43.292 Serial Number: .................... 00:19:43.292 Model Number: ........................................ 
00:19:43.292 Firmware Version: 25.01
00:19:43.292 Recommended Arb Burst: 0
00:19:43.292 IEEE OUI Identifier: 00 00 00
00:19:43.292 Multi-path I/O
00:19:43.292 May have multiple subsystem ports: No
00:19:43.292 May have multiple controllers: No
00:19:43.292 Associated with SR-IOV VF: No
00:19:43.292 Max Data Transfer Size: 131072
00:19:43.292 Max Number of Namespaces: 0
00:19:43.292 Max Number of I/O Queues: 1024
00:19:43.292 NVMe Specification Version (VS): 1.3
00:19:43.292 NVMe Specification Version (Identify): 1.3
00:19:43.292 Maximum Queue Entries: 128
00:19:43.292 Contiguous Queues Required: Yes
00:19:43.292 Arbitration Mechanisms Supported
00:19:43.292 Weighted Round Robin: Not Supported
00:19:43.292 Vendor Specific: Not Supported
00:19:43.292 Reset Timeout: 15000 ms
00:19:43.292 Doorbell Stride: 4 bytes
00:19:43.292 NVM Subsystem Reset: Not Supported
00:19:43.292 Command Sets Supported
00:19:43.292 NVM Command Set: Supported
00:19:43.292 Boot Partition: Not Supported
00:19:43.292 Memory Page Size Minimum: 4096 bytes
00:19:43.292 Memory Page Size Maximum: 4096 bytes
00:19:43.292 Persistent Memory Region: Not Supported
00:19:43.292 Optional Asynchronous Events Supported
00:19:43.292 Namespace Attribute Notices: Not Supported
00:19:43.292 Firmware Activation Notices: Not Supported
00:19:43.292 ANA Change Notices: Not Supported
00:19:43.292 PLE Aggregate Log Change Notices: Not Supported
00:19:43.292 LBA Status Info Alert Notices: Not Supported
00:19:43.292 EGE Aggregate Log Change Notices: Not Supported
00:19:43.292 Normal NVM Subsystem Shutdown event: Not Supported
00:19:43.292 Zone Descriptor Change Notices: Not Supported
00:19:43.292 Discovery Log Change Notices: Supported
00:19:43.292 Controller Attributes
00:19:43.292 128-bit Host Identifier: Not Supported
00:19:43.292 Non-Operational Permissive Mode: Not Supported
00:19:43.292 NVM Sets: Not Supported
00:19:43.292 Read Recovery Levels: Not Supported
00:19:43.292 Endurance Groups: Not Supported
00:19:43.292 Predictable Latency Mode: Not Supported
00:19:43.292 Traffic Based Keep ALive: Not Supported
00:19:43.292 Namespace Granularity: Not Supported
00:19:43.292 SQ Associations: Not Supported
00:19:43.292 UUID List: Not Supported
00:19:43.292 Multi-Domain Subsystem: Not Supported
00:19:43.292 Fixed Capacity Management: Not Supported
00:19:43.292 Variable Capacity Management: Not Supported
00:19:43.292 Delete Endurance Group: Not Supported
00:19:43.292 Delete NVM Set: Not Supported
00:19:43.292 Extended LBA Formats Supported: Not Supported
00:19:43.292 Flexible Data Placement Supported: Not Supported
00:19:43.292
00:19:43.292 Controller Memory Buffer Support
00:19:43.293 ================================
00:19:43.293 Supported: No
00:19:43.293
00:19:43.293 Persistent Memory Region Support
00:19:43.293 ================================
00:19:43.293 Supported: No
00:19:43.293
00:19:43.293 Admin Command Set Attributes
00:19:43.293 ============================
00:19:43.293 Security Send/Receive: Not Supported
00:19:43.293 Format NVM: Not Supported
00:19:43.293 Firmware Activate/Download: Not Supported
00:19:43.293 Namespace Management: Not Supported
00:19:43.293 Device Self-Test: Not Supported
00:19:43.293 Directives: Not Supported
00:19:43.293 NVMe-MI: Not Supported
00:19:43.293 Virtualization Management: Not Supported
00:19:43.293 Doorbell Buffer Config: Not Supported
00:19:43.293 Get LBA Status Capability: Not Supported
00:19:43.293 Command & Feature Lockdown Capability: Not Supported
00:19:43.293 Abort Command Limit: 1
00:19:43.293 Async Event Request Limit: 4
00:19:43.293 Number of Firmware Slots: N/A
00:19:43.293 Firmware Slot 1 Read-Only: N/A
00:19:43.293 Firmware Activation Without Reset: N/A
00:19:43.293 Multiple Update Detection Support: N/A
00:19:43.293 Firmware Update Granularity: No Information Provided
00:19:43.293 Per-Namespace SMART Log: No
00:19:43.293 Asymmetric Namespace Access Log Page: Not Supported
00:19:43.293 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:19:43.293 Command Effects Log Page: Not Supported
00:19:43.293 Get Log Page Extended Data: Supported
00:19:43.293 Telemetry Log Pages: Not Supported
00:19:43.293 Persistent Event Log Pages: Not Supported
00:19:43.293 Supported Log Pages Log Page: May Support
00:19:43.293 Commands Supported & Effects Log Page: Not Supported
00:19:43.293 Feature Identifiers & Effects Log Page:May Support
00:19:43.293 NVMe-MI Commands & Effects Log Page: May Support
00:19:43.293 Data Area 4 for Telemetry Log: Not Supported
00:19:43.293 Error Log Page Entries Supported: 128
00:19:43.293 Keep Alive: Not Supported
00:19:43.293
00:19:43.293 NVM Command Set Attributes
00:19:43.293 ==========================
00:19:43.293 Submission Queue Entry Size
00:19:43.293 Max: 1
00:19:43.293 Min: 1
00:19:43.293 Completion Queue Entry Size
00:19:43.293 Max: 1
00:19:43.293 Min: 1
00:19:43.293 Number of Namespaces: 0
00:19:43.293 Compare Command: Not Supported
00:19:43.293 Write Uncorrectable Command: Not Supported
00:19:43.293 Dataset Management Command: Not Supported
00:19:43.293 Write Zeroes Command: Not Supported
00:19:43.293 Set Features Save Field: Not Supported
00:19:43.293 Reservations: Not Supported
00:19:43.293 Timestamp: Not Supported
00:19:43.293 Copy: Not Supported
00:19:43.293 Volatile Write Cache: Not Present
00:19:43.293 Atomic Write Unit (Normal): 1
00:19:43.293 Atomic Write Unit (PFail): 1
00:19:43.293 Atomic Compare & Write Unit: 1
00:19:43.293 Fused Compare & Write: Supported
00:19:43.293 Scatter-Gather List
00:19:43.293 SGL Command Set: Supported
00:19:43.293 SGL Keyed: Supported
00:19:43.293 SGL Bit Bucket Descriptor: Not Supported
00:19:43.293 SGL Metadata Pointer: Not Supported
00:19:43.293 Oversized SGL: Not Supported
00:19:43.293 SGL Metadata Address: Not Supported
00:19:43.293 SGL Offset: Supported
00:19:43.293 Transport SGL Data Block: Not Supported
00:19:43.293 Replay Protected Memory Block: Not Supported
00:19:43.293
00:19:43.293 Firmware Slot Information
00:19:43.293 =========================
00:19:43.293 Active slot: 0
00:19:43.293
00:19:43.293
00:19:43.293 Error Log
00:19:43.293 =========
00:19:43.293
00:19:43.293 Active Namespaces
00:19:43.293 =================
00:19:43.293 Discovery Log Page
00:19:43.293 ==================
00:19:43.293 Generation Counter: 2
00:19:43.293 Number of Records: 2
00:19:43.293 Record Format: 0
00:19:43.293
00:19:43.293 Discovery Log Entry 0
00:19:43.293 ----------------------
00:19:43.293 Transport Type: 3 (TCP)
00:19:43.293 Address Family: 1 (IPv4)
00:19:43.293 Subsystem Type: 3 (Current Discovery Subsystem)
00:19:43.293 Entry Flags:
00:19:43.293 Duplicate Returned Information: 1
00:19:43.293 Explicit Persistent Connection Support for Discovery: 1
00:19:43.293 Transport Requirements:
00:19:43.293 Secure Channel: Not Required
00:19:43.293 Port ID: 0 (0x0000)
00:19:43.293 Controller ID: 65535 (0xffff)
00:19:43.293 Admin Max SQ Size: 128
00:19:43.293 Transport Service Identifier: 4420
00:19:43.293 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:19:43.293 Transport Address: 10.0.0.3
00:19:43.293
Discovery Log Entry 1 00:19:43.293 ---------------------- 00:19:43.293 Transport Type: 3 (TCP) 00:19:43.293 Address Family: 1 (IPv4) 00:19:43.293 Subsystem Type: 2 (NVM Subsystem) 00:19:43.293 Entry Flags: 00:19:43.293 Duplicate Returned Information: 0 00:19:43.293 Explicit Persistent Connection Support for Discovery: 0 00:19:43.293 Transport Requirements: 00:19:43.293 Secure Channel: Not Required 00:19:43.293 Port ID: 0 (0x0000) 00:19:43.293 Controller ID: 65535 (0xffff) 00:19:43.293 Admin Max SQ Size: 128 00:19:43.293 Transport Service Identifier: 4420 00:19:43.293 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:19:43.293 Transport Address: 10.0.0.3 [2024-11-04 14:45:52.175212] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:19:43.293 [2024-11-04 14:45:52.175219] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3740) on tqpair=0x236f750 00:19:43.293 [2024-11-04 14:45:52.175224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.293 [2024-11-04 14:45:52.175228] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d38c0) on tqpair=0x236f750 00:19:43.293 [2024-11-04 14:45:52.175231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.293 [2024-11-04 14:45:52.175235] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3a40) on tqpair=0x236f750 00:19:43.293 [2024-11-04 14:45:52.175238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.293 [2024-11-04 14:45:52.175242] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.293 [2024-11-04 14:45:52.175245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.293 [2024-11-04 14:45:52.175251] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.293 [2024-11-04 14:45:52.175253] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.293 [2024-11-04 14:45:52.175256] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.293 [2024-11-04 14:45:52.175261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.293 [2024-11-04 14:45:52.175273] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.293 [2024-11-04 14:45:52.175303] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.293 [2024-11-04 14:45:52.175308] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.293 [2024-11-04 14:45:52.175310] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.293 [2024-11-04 14:45:52.175313] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.293 [2024-11-04 14:45:52.175318] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.293 [2024-11-04 14:45:52.175321] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.293 [2024-11-04 14:45:52.175323] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.293 [2024-11-04 
14:45:52.175328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.293 [2024-11-04 14:45:52.175340] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.293 [2024-11-04 14:45:52.175374] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.293 [2024-11-04 14:45:52.175379] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.293 [2024-11-04 14:45:52.175381] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.293 [2024-11-04 14:45:52.175384] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.293 [2024-11-04 14:45:52.175388] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:19:43.293 [2024-11-04 14:45:52.175391] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:19:43.293 [2024-11-04 14:45:52.175398] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.293 [2024-11-04 14:45:52.175400] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.293 [2024-11-04 14:45:52.175403] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.293 [2024-11-04 14:45:52.175408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.293 [2024-11-04 14:45:52.175418] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.293 [2024-11-04 14:45:52.175450] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.293 [2024-11-04 14:45:52.175455] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.293 [2024-11-04 14:45:52.175457] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.293 [2024-11-04 14:45:52.175459] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.293 [2024-11-04 14:45:52.175467] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.175470] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.175472] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.294 [2024-11-04 14:45:52.175478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.294 [2024-11-04 14:45:52.175487] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.294 [2024-11-04 14:45:52.175516] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.294 [2024-11-04 14:45:52.175521] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.294 [2024-11-04 14:45:52.175523] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.175526] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.294 [2024-11-04 14:45:52.175533] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.175536] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.175538] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.294 [2024-11-04 14:45:52.175543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.294 [2024-11-04 14:45:52.175553] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.294 [2024-11-04 14:45:52.175584] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.294 [2024-11-04 14:45:52.175589] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.294 [2024-11-04 14:45:52.175591] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.175594] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.294 [2024-11-04 14:45:52.175601] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.175604] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.175617] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.294 [2024-11-04 14:45:52.175622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.294 [2024-11-04 14:45:52.175633] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.294 [2024-11-04 14:45:52.175669] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.294 [2024-11-04 14:45:52.175674] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.294 [2024-11-04 14:45:52.175676] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.175679] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.294 [2024-11-04 14:45:52.175687] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.175689] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.175692] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.294 [2024-11-04 14:45:52.175697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.294 [2024-11-04 14:45:52.175707] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.294 [2024-11-04 14:45:52.175738] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.294 [2024-11-04 14:45:52.175742] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.294 [2024-11-04 14:45:52.175745] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.175747] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.294 [2024-11-04 14:45:52.175755] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.175757] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.175760] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.294 [2024-11-04 14:45:52.175765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.294 [2024-11-04 14:45:52.175774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.294 [2024-11-04 14:45:52.175801] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.294 [2024-11-04 14:45:52.175805] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.294 [2024-11-04 14:45:52.175808] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.175810] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.294 [2024-11-04 14:45:52.175818] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.175821] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.175823] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.294 [2024-11-04 14:45:52.175828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.294 [2024-11-04 14:45:52.175838] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.294 [2024-11-04 14:45:52.175869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.294 [2024-11-04 14:45:52.175874] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.294 [2024-11-04 14:45:52.175877] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.175879] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.294 [2024-11-04 14:45:52.175887] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.175889] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.175892] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.294 [2024-11-04 14:45:52.175897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.294 [2024-11-04 14:45:52.175907] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.294 [2024-11-04 14:45:52.175933] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.294 [2024-11-04 14:45:52.175938] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.294 [2024-11-04 14:45:52.175940] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.175943] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.294 [2024-11-04 14:45:52.175950] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.175953] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.175955] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.294 [2024-11-04 14:45:52.175960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.294 [2024-11-04 14:45:52.175970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.294 
[2024-11-04 14:45:52.175998] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.294 [2024-11-04 14:45:52.176003] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.294 [2024-11-04 14:45:52.176005] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.176008] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.294 [2024-11-04 14:45:52.176016] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.176018] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.176021] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.294 [2024-11-04 14:45:52.176026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.294 [2024-11-04 14:45:52.176036] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.294 [2024-11-04 14:45:52.176065] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.294 [2024-11-04 14:45:52.176070] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.294 [2024-11-04 14:45:52.176072] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.176075] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.294 [2024-11-04 14:45:52.176082] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.176085] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.176087] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.294 [2024-11-04 14:45:52.176092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.294 [2024-11-04 14:45:52.176102] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.294 [2024-11-04 14:45:52.176133] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.294 [2024-11-04 14:45:52.176145] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.294 [2024-11-04 14:45:52.176148] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.176151] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.294 [2024-11-04 14:45:52.176158] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.176161] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.176163] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.294 [2024-11-04 14:45:52.176169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.294 [2024-11-04 14:45:52.176179] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.294 [2024-11-04 14:45:52.176208] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.294 [2024-11-04 14:45:52.176217] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:19:43.294 [2024-11-04 14:45:52.176219] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.176222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.294 [2024-11-04 14:45:52.176229] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.176232] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.294 [2024-11-04 14:45:52.176235] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.295 [2024-11-04 14:45:52.176240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.295 [2024-11-04 14:45:52.176250] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.295 [2024-11-04 14:45:52.176279] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.295 [2024-11-04 14:45:52.176283] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.295 [2024-11-04 14:45:52.176286] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.295 [2024-11-04 14:45:52.176288] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.295 [2024-11-04 14:45:52.176296] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.295 [2024-11-04 14:45:52.176299] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.295 [2024-11-04 14:45:52.176301] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.295 [2024-11-04 14:45:52.176307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.295 [2024-11-04 14:45:52.176316] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.295 [2024-11-04 14:45:52.176343] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.295 [2024-11-04 14:45:52.176351] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.295 [2024-11-04 14:45:52.176354] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.295 [2024-11-04 14:45:52.176357] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.295 [2024-11-04 14:45:52.176364] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.295 [2024-11-04 14:45:52.176367] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.295 [2024-11-04 14:45:52.176369] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.295 [2024-11-04 14:45:52.176375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.295 [2024-11-04 14:45:52.176385] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.295 [2024-11-04 14:45:52.176411] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.295 [2024-11-04 14:45:52.176416] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.295 [2024-11-04 14:45:52.176418] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.295 [2024-11-04 14:45:52.176421] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.295 [2024-11-04 14:45:52.176428] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.295 [2024-11-04 14:45:52.176431] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.295 [2024-11-04 14:45:52.176433] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.295 [2024-11-04 14:45:52.176438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.295 [2024-11-04 14:45:52.176448] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.295 [2024-11-04 14:45:52.176477] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.295 [2024-11-04 14:45:52.176481] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.295 [2024-11-04 14:45:52.176484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.295 [2024-11-04 14:45:52.176486] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.295 [2024-11-04 14:45:52.176494] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.295 [2024-11-04 14:45:52.176496] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.295 [2024-11-04 14:45:52.176499] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.295 [2024-11-04 14:45:52.176504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.295 [2024-11-04 14:45:52.176513] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.295 [2024-11-04 14:45:52.176540] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.295 [2024-11-04 14:45:52.176544] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.295 [2024-11-04 14:45:52.176547] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.295 [2024-11-04 14:45:52.176550] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.295 [2024-11-04 14:45:52.176557] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.295 [2024-11-04 14:45:52.176559] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.295 [2024-11-04 14:45:52.176562] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.295 [2024-11-04 14:45:52.176567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.295 [2024-11-04 14:45:52.176576] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.295 [2024-11-04 14:45:52.176617] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.295 [2024-11-04 14:45:52.176622] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.295 [2024-11-04 14:45:52.176624] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.295 [2024-11-04 14:45:52.176627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.295 [2024-11-04 14:45:52.176634] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.295 [2024-11-04 14:45:52.176637] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.295 [2024-11-04 14:45:52.176640] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.295 [2024-11-04 14:45:52.176645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.295 [2024-11-04 14:45:52.176655] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.295 [2024-11-04 14:45:52.176682] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.295 [2024-11-04 14:45:52.176687] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.295 [2024-11-04 14:45:52.176689] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.295 [2024-11-04 14:45:52.176692] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.295 [2024-11-04 14:45:52.176699] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.295 [2024-11-04 14:45:52.176702] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.295 [2024-11-04 14:45:52.176704] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.295 [2024-11-04 14:45:52.176710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.295 [2024-11-04 14:45:52.176719] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.295 [2024-11-04 14:45:52.176750] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.295 [2024-11-04 14:45:52.176755] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.295 [2024-11-04 14:45:52.176757] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.295 [2024-11-04 14:45:52.176760] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.295 [2024-11-04 14:45:52.176767] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.295 [2024-11-04 14:45:52.176770] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.295 [2024-11-04 14:45:52.176772] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.295 [2024-11-04 14:45:52.176778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.295 [2024-11-04 14:45:52.176787] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.295 [2024-11-04 14:45:52.176816] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.295 [2024-11-04 14:45:52.176821] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.295 [2024-11-04 14:45:52.176823] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.295 [2024-11-04 14:45:52.176826] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.295 [2024-11-04 14:45:52.176833] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.295 [2024-11-04 14:45:52.176836] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.295 [2024-11-04 14:45:52.176838] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.295 
[2024-11-04 14:45:52.176843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.295 [2024-11-04 14:45:52.176853] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.295 [2024-11-04 14:45:52.176882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.295 [2024-11-04 14:45:52.176886] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.295 [2024-11-04 14:45:52.176889] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.295 [2024-11-04 14:45:52.176892] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.295 [2024-11-04 14:45:52.176899] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.295 [2024-11-04 14:45:52.176902] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.295 [2024-11-04 14:45:52.176904] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.295 [2024-11-04 14:45:52.176909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.295 [2024-11-04 14:45:52.176919] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.295 [2024-11-04 14:45:52.176950] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.295 [2024-11-04 14:45:52.176955] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.296 [2024-11-04 14:45:52.176957] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.296 [2024-11-04 14:45:52.176960] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.296 [2024-11-04 14:45:52.176967] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.296 [2024-11-04 14:45:52.176970] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.296 [2024-11-04 14:45:52.176972] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.296 [2024-11-04 14:45:52.176986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.296 [2024-11-04 14:45:52.176997] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.296 [2024-11-04 14:45:52.177031] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.296 [2024-11-04 14:45:52.177035] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.296 [2024-11-04 14:45:52.177038] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.296 [2024-11-04 14:45:52.177041] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.296 [2024-11-04 14:45:52.177048] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.296 [2024-11-04 14:45:52.177051] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.296 [2024-11-04 14:45:52.177053] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.296 [2024-11-04 14:45:52.177059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.296 [2024-11-04 14:45:52.177068] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.296 [2024-11-04 14:45:52.177097] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.296 [2024-11-04 14:45:52.177105] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.296 [2024-11-04 14:45:52.177108] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.296 [2024-11-04 14:45:52.177111] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.296 [2024-11-04 14:45:52.177118] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.296 [2024-11-04 14:45:52.177121] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.296 [2024-11-04 14:45:52.177124] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.296 [2024-11-04 14:45:52.177129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.296 [2024-11-04 14:45:52.177139] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.296 [2024-11-04 14:45:52.177167] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.296 [2024-11-04 14:45:52.177172] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.296 [2024-11-04 14:45:52.177174] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.296 [2024-11-04 14:45:52.177177] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.296 [2024-11-04 14:45:52.177184] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.296 [2024-11-04 14:45:52.177187] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.296 [2024-11-04 14:45:52.177190] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.296 [2024-11-04 14:45:52.177195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.296 [2024-11-04 14:45:52.177204] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.296 [2024-11-04 14:45:52.177233] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.296 [2024-11-04 14:45:52.177241] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.296 [2024-11-04 14:45:52.177244] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.296 [2024-11-04 14:45:52.177247] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.296 [2024-11-04 14:45:52.177254] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.296 [2024-11-04 14:45:52.177257] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.296 [2024-11-04 14:45:52.177259] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.296 [2024-11-04 14:45:52.177265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.296 [2024-11-04 14:45:52.177274] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.296 [2024-11-04 14:45:52.177306] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.296 
[2024-11-04 14:45:52.177311] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.296 [2024-11-04 14:45:52.177313] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.296 [2024-11-04 14:45:52.177316] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.296 [2024-11-04 14:45:52.177323] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.296 [2024-11-04 14:45:52.177326] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.296 [2024-11-04 14:45:52.177328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.296 [2024-11-04 14:45:52.177334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.296 [2024-11-04 14:45:52.177343] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.296 [2024-11-04 14:45:52.177374] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.296 [2024-11-04 14:45:52.177380] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.296 [2024-11-04 14:45:52.177383] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.296 [2024-11-04 14:45:52.177385] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.296 [2024-11-04 14:45:52.177393] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.296 [2024-11-04 14:45:52.177396] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.296 [2024-11-04 14:45:52.177398] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.296 [2024-11-04 14:45:52.177403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.296 [2024-11-04 14:45:52.177413] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.296 [2024-11-04 14:45:52.177443] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.296 [2024-11-04 14:45:52.177448] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.296 [2024-11-04 14:45:52.177450] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.296 [2024-11-04 14:45:52.177452] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.296 [2024-11-04 14:45:52.177460] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.296 [2024-11-04 14:45:52.177463] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.296 [2024-11-04 14:45:52.177465] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.296 [2024-11-04 14:45:52.177470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.296 [2024-11-04 14:45:52.177480] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.296 [2024-11-04 14:45:52.177511] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.296 [2024-11-04 14:45:52.177519] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.296 [2024-11-04 14:45:52.177522] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:19:43.296 [2024-11-04 14:45:52.177524] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.296 [2024-11-04 14:45:52.177532] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.296 [2024-11-04 14:45:52.177535] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.296 [2024-11-04 14:45:52.177537] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.296 [2024-11-04 14:45:52.177542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.296 [2024-11-04 14:45:52.177552] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.296 [2024-11-04 14:45:52.177578] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.296 [2024-11-04 14:45:52.177583] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.296 [2024-11-04 14:45:52.177585] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.296 [2024-11-04 14:45:52.177588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.296 [2024-11-04 14:45:52.177595] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.296 [2024-11-04 14:45:52.177598] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.296 [2024-11-04 14:45:52.177600] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f750) 00:19:43.296 [2024-11-04 14:45:52.181614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.296 [2024-11-04 14:45:52.181635] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d3bc0, cid 3, qid 0 00:19:43.296 [2024-11-04 14:45:52.181668] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.296 [2024-11-04 14:45:52.181673] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.296 [2024-11-04 14:45:52.181675] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.296 [2024-11-04 14:45:52.181678] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d3bc0) on tqpair=0x236f750 00:19:43.296 [2024-11-04 14:45:52.181683] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:19:43.296 00:19:43.296 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:19:43.296 [2024-11-04 14:45:52.215373] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
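For reference, the identify pass against the data subsystem that starts here can be reproduced by hand with the same binary and transport ID string echoed by host/identify.sh above; the paths and addresses are the ones used on this particular test VM, shown purely as an illustration, and dropping the subnqn key from the -r string should point the tool at the discovery service instead (which is how the discovery-controller output earlier in this log was obtained):

  # Manual re-run of the identify step (illustrative; same arguments as the harness uses).
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -L all   # -L all turns on the per-component debug logging seen in the trace that follows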
00:19:43.296 [2024-11-04 14:45:52.215402] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72740 ] 00:19:43.296 [2024-11-04 14:45:52.368073] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:19:43.296 [2024-11-04 14:45:52.368129] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:43.296 [2024-11-04 14:45:52.368133] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:43.296 [2024-11-04 14:45:52.368143] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:43.296 [2024-11-04 14:45:52.368151] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:43.296 [2024-11-04 14:45:52.368402] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:19:43.297 [2024-11-04 14:45:52.368438] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x14ef750 0 00:19:43.297 [2024-11-04 14:45:52.375621] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:43.297 [2024-11-04 14:45:52.375636] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:43.297 [2024-11-04 14:45:52.375639] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:43.297 [2024-11-04 14:45:52.375642] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:43.297 [2024-11-04 14:45:52.375664] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.297 [2024-11-04 14:45:52.375668] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.297 [2024-11-04 14:45:52.375671] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14ef750) 00:19:43.297 [2024-11-04 14:45:52.375682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:43.297 [2024-11-04 14:45:52.375701] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553740, cid 0, qid 0 00:19:43.297 [2024-11-04 14:45:52.383619] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.297 [2024-11-04 14:45:52.383631] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.297 [2024-11-04 14:45:52.383634] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.297 [2024-11-04 14:45:52.383637] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553740) on tqpair=0x14ef750 00:19:43.297 [2024-11-04 14:45:52.383644] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:43.297 [2024-11-04 14:45:52.383650] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:19:43.297 [2024-11-04 14:45:52.383654] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:19:43.297 [2024-11-04 14:45:52.383664] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.297 [2024-11-04 14:45:52.383666] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.297 [2024-11-04 14:45:52.383669] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14ef750) 00:19:43.297 [2024-11-04 14:45:52.383676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.297 [2024-11-04 14:45:52.383692] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553740, cid 0, qid 0 00:19:43.297 [2024-11-04 14:45:52.383737] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.297 [2024-11-04 14:45:52.383742] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.297 [2024-11-04 14:45:52.383744] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.297 [2024-11-04 14:45:52.383747] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553740) on tqpair=0x14ef750 00:19:43.297 [2024-11-04 14:45:52.383750] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:19:43.297 [2024-11-04 14:45:52.383755] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:19:43.297 [2024-11-04 14:45:52.383760] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.297 [2024-11-04 14:45:52.383763] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.297 [2024-11-04 14:45:52.383765] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14ef750) 00:19:43.297 [2024-11-04 14:45:52.383771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.297 [2024-11-04 14:45:52.383781] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553740, cid 0, qid 0 00:19:43.297 [2024-11-04 14:45:52.383815] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.297 [2024-11-04 14:45:52.383819] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.297 [2024-11-04 14:45:52.383822] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.297 [2024-11-04 14:45:52.383825] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553740) on tqpair=0x14ef750 00:19:43.297 [2024-11-04 14:45:52.383828] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:19:43.297 [2024-11-04 14:45:52.383834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:19:43.297 [2024-11-04 14:45:52.383839] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.297 [2024-11-04 14:45:52.383841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.297 [2024-11-04 14:45:52.383844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14ef750) 00:19:43.297 [2024-11-04 14:45:52.383849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.297 [2024-11-04 14:45:52.383860] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553740, cid 0, qid 0 00:19:43.297 [2024-11-04 14:45:52.383896] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.297 [2024-11-04 14:45:52.383900] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.297 
[2024-11-04 14:45:52.383903] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.297 [2024-11-04 14:45:52.383905] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553740) on tqpair=0x14ef750 00:19:43.297 [2024-11-04 14:45:52.383909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:43.297 [2024-11-04 14:45:52.383916] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.297 [2024-11-04 14:45:52.383919] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.297 [2024-11-04 14:45:52.383921] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14ef750) 00:19:43.297 [2024-11-04 14:45:52.383927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.297 [2024-11-04 14:45:52.383936] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553740, cid 0, qid 0 00:19:43.297 [2024-11-04 14:45:52.383975] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.297 [2024-11-04 14:45:52.383980] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.297 [2024-11-04 14:45:52.383982] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.297 [2024-11-04 14:45:52.383985] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553740) on tqpair=0x14ef750 00:19:43.297 [2024-11-04 14:45:52.383989] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:19:43.297 [2024-11-04 14:45:52.383992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:19:43.297 [2024-11-04 14:45:52.383997] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:43.297 [2024-11-04 14:45:52.384102] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:19:43.297 [2024-11-04 14:45:52.384105] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:43.297 [2024-11-04 14:45:52.384111] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.297 [2024-11-04 14:45:52.384114] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.297 [2024-11-04 14:45:52.384116] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14ef750) 00:19:43.297 [2024-11-04 14:45:52.384121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.297 [2024-11-04 14:45:52.384131] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553740, cid 0, qid 0 00:19:43.297 [2024-11-04 14:45:52.384168] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.297 [2024-11-04 14:45:52.384173] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.297 [2024-11-04 14:45:52.384175] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.297 [2024-11-04 14:45:52.384177] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553740) on tqpair=0x14ef750 
00:19:43.297 [2024-11-04 14:45:52.384181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:43.297 [2024-11-04 14:45:52.384188] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.297 [2024-11-04 14:45:52.384191] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.297 [2024-11-04 14:45:52.384193] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14ef750) 00:19:43.297 [2024-11-04 14:45:52.384198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.297 [2024-11-04 14:45:52.384208] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553740, cid 0, qid 0 00:19:43.297 [2024-11-04 14:45:52.384242] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.297 [2024-11-04 14:45:52.384247] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.297 [2024-11-04 14:45:52.384249] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.297 [2024-11-04 14:45:52.384252] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553740) on tqpair=0x14ef750 00:19:43.297 [2024-11-04 14:45:52.384255] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:43.297 [2024-11-04 14:45:52.384259] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:19:43.297 [2024-11-04 14:45:52.384264] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:19:43.297 [2024-11-04 14:45:52.384270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:19:43.297 [2024-11-04 14:45:52.384278] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.297 [2024-11-04 14:45:52.384280] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14ef750) 00:19:43.297 [2024-11-04 14:45:52.384286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.297 [2024-11-04 14:45:52.384297] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553740, cid 0, qid 0 00:19:43.297 [2024-11-04 14:45:52.384371] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:43.297 [2024-11-04 14:45:52.384382] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:43.297 [2024-11-04 14:45:52.384385] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:43.297 [2024-11-04 14:45:52.384388] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14ef750): datao=0, datal=4096, cccid=0 00:19:43.297 [2024-11-04 14:45:52.384391] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1553740) on tqpair(0x14ef750): expected_datao=0, payload_size=4096 00:19:43.297 [2024-11-04 14:45:52.384394] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.297 [2024-11-04 14:45:52.384400] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:43.297 [2024-11-04 14:45:52.384403] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:43.297 [2024-11-04 14:45:52.384410] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.297 [2024-11-04 14:45:52.384414] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.297 [2024-11-04 14:45:52.384416] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.297 [2024-11-04 14:45:52.384419] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553740) on tqpair=0x14ef750 00:19:43.298 [2024-11-04 14:45:52.384425] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:19:43.298 [2024-11-04 14:45:52.384428] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:19:43.298 [2024-11-04 14:45:52.384431] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:19:43.298 [2024-11-04 14:45:52.384435] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:19:43.298 [2024-11-04 14:45:52.384438] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:19:43.298 [2024-11-04 14:45:52.384441] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:19:43.298 [2024-11-04 14:45:52.384450] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:19:43.298 [2024-11-04 14:45:52.384455] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.298 [2024-11-04 14:45:52.384458] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.298 [2024-11-04 14:45:52.384461] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14ef750) 00:19:43.298 [2024-11-04 14:45:52.384466] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:43.298 [2024-11-04 14:45:52.384477] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553740, cid 0, qid 0 00:19:43.298 [2024-11-04 14:45:52.384521] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.298 [2024-11-04 14:45:52.384530] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.298 [2024-11-04 14:45:52.384532] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.298 [2024-11-04 14:45:52.384535] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553740) on tqpair=0x14ef750 00:19:43.298 [2024-11-04 14:45:52.384541] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.298 [2024-11-04 14:45:52.384543] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.298 [2024-11-04 14:45:52.384546] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14ef750) 00:19:43.298 [2024-11-04 14:45:52.384551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:43.298 [2024-11-04 14:45:52.384556] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.298 [2024-11-04 14:45:52.384558] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.298 [2024-11-04 
14:45:52.384561] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x14ef750) 00:19:43.298 [2024-11-04 14:45:52.384565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:43.298 [2024-11-04 14:45:52.384570] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.298 [2024-11-04 14:45:52.384572] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.298 [2024-11-04 14:45:52.384575] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x14ef750) 00:19:43.298 [2024-11-04 14:45:52.384579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:43.298 [2024-11-04 14:45:52.384584] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.298 [2024-11-04 14:45:52.384586] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.298 [2024-11-04 14:45:52.384589] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ef750) 00:19:43.298 [2024-11-04 14:45:52.384593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:43.298 [2024-11-04 14:45:52.384596] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:43.298 [2024-11-04 14:45:52.384604] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:43.298 [2024-11-04 14:45:52.384618] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.298 [2024-11-04 14:45:52.384620] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14ef750) 00:19:43.298 [2024-11-04 14:45:52.384626] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.298 [2024-11-04 14:45:52.384638] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553740, cid 0, qid 0 00:19:43.298 [2024-11-04 14:45:52.384643] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15538c0, cid 1, qid 0 00:19:43.298 [2024-11-04 14:45:52.384646] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553a40, cid 2, qid 0 00:19:43.298 [2024-11-04 14:45:52.384650] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553bc0, cid 3, qid 0 00:19:43.298 [2024-11-04 14:45:52.384653] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553d40, cid 4, qid 0 00:19:43.298 [2024-11-04 14:45:52.384736] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.298 [2024-11-04 14:45:52.384742] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.298 [2024-11-04 14:45:52.384744] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.298 [2024-11-04 14:45:52.384747] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553d40) on tqpair=0x14ef750 00:19:43.298 [2024-11-04 14:45:52.384751] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:19:43.298 [2024-11-04 14:45:52.384755] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:43.298 [2024-11-04 14:45:52.384760] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:19:43.298 [2024-11-04 14:45:52.384767] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:43.298 [2024-11-04 14:45:52.384771] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.298 [2024-11-04 14:45:52.384774] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.298 [2024-11-04 14:45:52.384777] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14ef750) 00:19:43.298 [2024-11-04 14:45:52.384782] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:43.298 [2024-11-04 14:45:52.384792] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553d40, cid 4, qid 0 00:19:43.298 [2024-11-04 14:45:52.384834] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.298 [2024-11-04 14:45:52.384845] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.298 [2024-11-04 14:45:52.384847] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.298 [2024-11-04 14:45:52.384850] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553d40) on tqpair=0x14ef750 00:19:43.298 [2024-11-04 14:45:52.384909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:19:43.298 [2024-11-04 14:45:52.384950] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:43.298 [2024-11-04 14:45:52.384956] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.298 [2024-11-04 14:45:52.384959] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14ef750) 00:19:43.298 [2024-11-04 14:45:52.384964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.298 [2024-11-04 14:45:52.384974] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553d40, cid 4, qid 0 00:19:43.298 [2024-11-04 14:45:52.385028] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:43.298 [2024-11-04 14:45:52.385033] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:43.298 [2024-11-04 14:45:52.385035] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:43.298 [2024-11-04 14:45:52.385038] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14ef750): datao=0, datal=4096, cccid=4 00:19:43.298 [2024-11-04 14:45:52.385041] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1553d40) on tqpair(0x14ef750): expected_datao=0, payload_size=4096 00:19:43.298 [2024-11-04 14:45:52.385044] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.298 [2024-11-04 14:45:52.385049] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:43.298 [2024-11-04 14:45:52.385052] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:43.298 [2024-11-04 
14:45:52.385058] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.298 [2024-11-04 14:45:52.385062] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.298 [2024-11-04 14:45:52.385064] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.298 [2024-11-04 14:45:52.385067] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553d40) on tqpair=0x14ef750 00:19:43.298 [2024-11-04 14:45:52.385076] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:19:43.298 [2024-11-04 14:45:52.385083] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:19:43.298 [2024-11-04 14:45:52.385090] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:19:43.298 [2024-11-04 14:45:52.385095] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.298 [2024-11-04 14:45:52.385097] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14ef750) 00:19:43.298 [2024-11-04 14:45:52.385102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.298 [2024-11-04 14:45:52.385114] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553d40, cid 4, qid 0 00:19:43.298 [2024-11-04 14:45:52.385190] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:43.298 [2024-11-04 14:45:52.385225] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:43.298 [2024-11-04 14:45:52.385227] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:43.298 [2024-11-04 14:45:52.385230] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14ef750): datao=0, datal=4096, cccid=4 00:19:43.298 [2024-11-04 14:45:52.385233] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1553d40) on tqpair(0x14ef750): expected_datao=0, payload_size=4096 00:19:43.298 [2024-11-04 14:45:52.385236] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.298 [2024-11-04 14:45:52.385241] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:43.298 [2024-11-04 14:45:52.385244] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:43.298 [2024-11-04 14:45:52.385250] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.298 [2024-11-04 14:45:52.385254] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.298 [2024-11-04 14:45:52.385257] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.298 [2024-11-04 14:45:52.385259] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553d40) on tqpair=0x14ef750 00:19:43.298 [2024-11-04 14:45:52.385270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:43.298 [2024-11-04 14:45:52.385277] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:43.299 [2024-11-04 14:45:52.385282] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.299 [2024-11-04 14:45:52.385285] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x14ef750) 00:19:43.299 [2024-11-04 14:45:52.385290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.299 [2024-11-04 14:45:52.385301] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553d40, cid 4, qid 0 00:19:43.299 [2024-11-04 14:45:52.385345] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:43.299 [2024-11-04 14:45:52.385350] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:43.299 [2024-11-04 14:45:52.385352] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:43.299 [2024-11-04 14:45:52.385355] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14ef750): datao=0, datal=4096, cccid=4 00:19:43.299 [2024-11-04 14:45:52.385358] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1553d40) on tqpair(0x14ef750): expected_datao=0, payload_size=4096 00:19:43.299 [2024-11-04 14:45:52.385360] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.299 [2024-11-04 14:45:52.385366] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:43.299 [2024-11-04 14:45:52.385368] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:43.299 [2024-11-04 14:45:52.385374] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.299 [2024-11-04 14:45:52.385379] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.299 [2024-11-04 14:45:52.385381] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.299 [2024-11-04 14:45:52.385384] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553d40) on tqpair=0x14ef750 00:19:43.299 [2024-11-04 14:45:52.385389] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:43.299 [2024-11-04 14:45:52.385395] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:19:43.299 [2024-11-04 14:45:52.385403] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:19:43.299 [2024-11-04 14:45:52.385408] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:19:43.299 [2024-11-04 14:45:52.385412] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:43.299 [2024-11-04 14:45:52.385415] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:19:43.299 [2024-11-04 14:45:52.385419] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:19:43.299 [2024-11-04 14:45:52.385422] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:19:43.299 [2024-11-04 14:45:52.385426] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:19:43.299 [2024-11-04 14:45:52.385438] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.299 
[2024-11-04 14:45:52.385440] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14ef750) 00:19:43.299 [2024-11-04 14:45:52.385446] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.299 [2024-11-04 14:45:52.385451] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.299 [2024-11-04 14:45:52.385453] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.299 [2024-11-04 14:45:52.385455] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14ef750) 00:19:43.299 [2024-11-04 14:45:52.385460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:43.299 [2024-11-04 14:45:52.385474] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553d40, cid 4, qid 0 00:19:43.299 [2024-11-04 14:45:52.385478] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553ec0, cid 5, qid 0 00:19:43.299 [2024-11-04 14:45:52.385532] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.299 [2024-11-04 14:45:52.385537] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.299 [2024-11-04 14:45:52.385540] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.299 [2024-11-04 14:45:52.385542] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553d40) on tqpair=0x14ef750 00:19:43.299 [2024-11-04 14:45:52.385548] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.299 [2024-11-04 14:45:52.385552] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.299 [2024-11-04 14:45:52.385554] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.299 [2024-11-04 14:45:52.385557] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553ec0) on tqpair=0x14ef750 00:19:43.299 [2024-11-04 14:45:52.385564] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.299 [2024-11-04 14:45:52.385567] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14ef750) 00:19:43.299 [2024-11-04 14:45:52.385572] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.299 [2024-11-04 14:45:52.385582] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553ec0, cid 5, qid 0 00:19:43.299 [2024-11-04 14:45:52.385628] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.299 [2024-11-04 14:45:52.385633] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.299 [2024-11-04 14:45:52.385635] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.299 [2024-11-04 14:45:52.385638] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553ec0) on tqpair=0x14ef750 00:19:43.299 [2024-11-04 14:45:52.385645] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.299 [2024-11-04 14:45:52.385647] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14ef750) 00:19:43.299 [2024-11-04 14:45:52.385652] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.299 [2024-11-04 14:45:52.385663] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553ec0, cid 5, qid 0 00:19:43.299 [2024-11-04 14:45:52.385702] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.299 [2024-11-04 14:45:52.385707] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.299 [2024-11-04 14:45:52.385709] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.299 [2024-11-04 14:45:52.385712] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553ec0) on tqpair=0x14ef750 00:19:43.299 [2024-11-04 14:45:52.385718] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.299 [2024-11-04 14:45:52.385721] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14ef750) 00:19:43.299 [2024-11-04 14:45:52.385726] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.299 [2024-11-04 14:45:52.385735] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553ec0, cid 5, qid 0 00:19:43.299 [2024-11-04 14:45:52.385777] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.299 [2024-11-04 14:45:52.385783] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.299 [2024-11-04 14:45:52.385785] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.299 [2024-11-04 14:45:52.385788] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553ec0) on tqpair=0x14ef750 00:19:43.299 [2024-11-04 14:45:52.385799] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.299 [2024-11-04 14:45:52.385802] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14ef750) 00:19:43.299 [2024-11-04 14:45:52.385807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.299 [2024-11-04 14:45:52.385813] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.299 [2024-11-04 14:45:52.385815] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14ef750) 00:19:43.299 [2024-11-04 14:45:52.385820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.299 [2024-11-04 14:45:52.385826] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.299 [2024-11-04 14:45:52.385829] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x14ef750) 00:19:43.299 [2024-11-04 14:45:52.385833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.299 [2024-11-04 14:45:52.385839] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.299 [2024-11-04 14:45:52.385842] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x14ef750) 00:19:43.299 [2024-11-04 14:45:52.385847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.299 [2024-11-04 14:45:52.385858] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553ec0, cid 5, qid 0 00:19:43.299 
[2024-11-04 14:45:52.385862] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553d40, cid 4, qid 0 00:19:43.299 [2024-11-04 14:45:52.385865] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1554040, cid 6, qid 0 00:19:43.299 [2024-11-04 14:45:52.385869] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15541c0, cid 7, qid 0 00:19:43.299 [2024-11-04 14:45:52.385988] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:43.299 [2024-11-04 14:45:52.385997] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:43.299 [2024-11-04 14:45:52.386000] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:43.299 [2024-11-04 14:45:52.386002] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14ef750): datao=0, datal=8192, cccid=5 00:19:43.299 [2024-11-04 14:45:52.386005] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1553ec0) on tqpair(0x14ef750): expected_datao=0, payload_size=8192 00:19:43.300 [2024-11-04 14:45:52.386008] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.300 [2024-11-04 14:45:52.386021] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:43.300 [2024-11-04 14:45:52.386023] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:43.300 [2024-11-04 14:45:52.386028] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:43.300 [2024-11-04 14:45:52.386033] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:43.300 [2024-11-04 14:45:52.386035] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:43.300 [2024-11-04 14:45:52.386037] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14ef750): datao=0, datal=512, cccid=4 00:19:43.300 [2024-11-04 14:45:52.386040] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1553d40) on tqpair(0x14ef750): expected_datao=0, payload_size=512 00:19:43.300 [2024-11-04 14:45:52.386043] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.300 [2024-11-04 14:45:52.386048] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:43.300 [2024-11-04 14:45:52.386051] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:43.300 [2024-11-04 14:45:52.386055] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:43.300 [2024-11-04 14:45:52.386059] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:43.300 [2024-11-04 14:45:52.386062] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:43.300 [2024-11-04 14:45:52.386064] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14ef750): datao=0, datal=512, cccid=6 00:19:43.300 [2024-11-04 14:45:52.386067] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1554040) on tqpair(0x14ef750): expected_datao=0, payload_size=512 00:19:43.300 [2024-11-04 14:45:52.386070] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.300 [2024-11-04 14:45:52.386075] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:43.300 [2024-11-04 14:45:52.386077] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:43.300 [2024-11-04 14:45:52.386082] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:43.300 [2024-11-04 14:45:52.386086] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:43.300 [2024-11-04 14:45:52.386088] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:43.300 [2024-11-04 14:45:52.386091] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14ef750): datao=0, datal=4096, cccid=7 00:19:43.300 [2024-11-04 14:45:52.386093] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15541c0) on tqpair(0x14ef750): expected_datao=0, payload_size=4096 00:19:43.300 [2024-11-04 14:45:52.386096] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.300 [2024-11-04 14:45:52.386102] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:43.300 [2024-11-04 14:45:52.386104] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:43.300 [2024-11-04 14:45:52.386110] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.300 [2024-11-04 14:45:52.386115] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.300 [2024-11-04 14:45:52.386117] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.300 [2024-11-04 14:45:52.386120] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553ec0) on tqpair=0x14ef750 00:19:43.300 [2024-11-04 14:45:52.386130] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.300 [2024-11-04 14:45:52.386135] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.300 [2024-11-04 14:45:52.386137] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.300 [2024-11-04 14:45:52.386140] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553d40) on tqpair=0x14ef750 00:19:43.300 [2024-11-04 14:45:52.386148] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.300 [2024-11-04 14:45:52.386153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.300 [2024-11-04 14:45:52.386155] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.300 [2024-11-04 14:45:52.386158] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1554040) on tqpair=0x14ef750 00:19:43.300 [2024-11-04 14:45:52.386164] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.300 [2024-11-04 14:45:52.386168] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.300 [2024-11-04 14:45:52.386170] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.300 [2024-11-04 14:45:52.386173] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15541c0) on tqpair=0x14ef750 00:19:43.300 ===================================================== 00:19:43.300 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:43.300 ===================================================== 00:19:43.300 Controller Capabilities/Features 00:19:43.300 ================================ 00:19:43.300 Vendor ID: 8086 00:19:43.300 Subsystem Vendor ID: 8086 00:19:43.300 Serial Number: SPDK00000000000001 00:19:43.300 Model Number: SPDK bdev Controller 00:19:43.300 Firmware Version: 25.01 00:19:43.300 Recommended Arb Burst: 6 00:19:43.300 IEEE OUI Identifier: e4 d2 5c 00:19:43.300 Multi-path I/O 00:19:43.300 May have multiple subsystem ports: Yes 00:19:43.300 May have multiple controllers: Yes 00:19:43.300 Associated with SR-IOV VF: No 00:19:43.300 Max Data Transfer Size: 131072 00:19:43.300 Max Number of Namespaces: 32 00:19:43.300 Max Number of I/O Queues: 127 00:19:43.300 NVMe Specification Version (VS): 1.3 00:19:43.300 NVMe Specification Version (Identify): 1.3 
00:19:43.300 Maximum Queue Entries: 128 00:19:43.300 Contiguous Queues Required: Yes 00:19:43.300 Arbitration Mechanisms Supported 00:19:43.300 Weighted Round Robin: Not Supported 00:19:43.300 Vendor Specific: Not Supported 00:19:43.300 Reset Timeout: 15000 ms 00:19:43.300 Doorbell Stride: 4 bytes 00:19:43.300 NVM Subsystem Reset: Not Supported 00:19:43.300 Command Sets Supported 00:19:43.300 NVM Command Set: Supported 00:19:43.300 Boot Partition: Not Supported 00:19:43.300 Memory Page Size Minimum: 4096 bytes 00:19:43.300 Memory Page Size Maximum: 4096 bytes 00:19:43.300 Persistent Memory Region: Not Supported 00:19:43.300 Optional Asynchronous Events Supported 00:19:43.300 Namespace Attribute Notices: Supported 00:19:43.300 Firmware Activation Notices: Not Supported 00:19:43.300 ANA Change Notices: Not Supported 00:19:43.300 PLE Aggregate Log Change Notices: Not Supported 00:19:43.300 LBA Status Info Alert Notices: Not Supported 00:19:43.300 EGE Aggregate Log Change Notices: Not Supported 00:19:43.300 Normal NVM Subsystem Shutdown event: Not Supported 00:19:43.300 Zone Descriptor Change Notices: Not Supported 00:19:43.300 Discovery Log Change Notices: Not Supported 00:19:43.300 Controller Attributes 00:19:43.300 128-bit Host Identifier: Supported 00:19:43.300 Non-Operational Permissive Mode: Not Supported 00:19:43.300 NVM Sets: Not Supported 00:19:43.300 Read Recovery Levels: Not Supported 00:19:43.300 Endurance Groups: Not Supported 00:19:43.300 Predictable Latency Mode: Not Supported 00:19:43.300 Traffic Based Keep ALive: Not Supported 00:19:43.300 Namespace Granularity: Not Supported 00:19:43.300 SQ Associations: Not Supported 00:19:43.300 UUID List: Not Supported 00:19:43.300 Multi-Domain Subsystem: Not Supported 00:19:43.300 Fixed Capacity Management: Not Supported 00:19:43.300 Variable Capacity Management: Not Supported 00:19:43.300 Delete Endurance Group: Not Supported 00:19:43.300 Delete NVM Set: Not Supported 00:19:43.300 Extended LBA Formats Supported: Not Supported 00:19:43.300 Flexible Data Placement Supported: Not Supported 00:19:43.300 00:19:43.300 Controller Memory Buffer Support 00:19:43.300 ================================ 00:19:43.300 Supported: No 00:19:43.300 00:19:43.300 Persistent Memory Region Support 00:19:43.300 ================================ 00:19:43.300 Supported: No 00:19:43.300 00:19:43.300 Admin Command Set Attributes 00:19:43.300 ============================ 00:19:43.300 Security Send/Receive: Not Supported 00:19:43.300 Format NVM: Not Supported 00:19:43.300 Firmware Activate/Download: Not Supported 00:19:43.300 Namespace Management: Not Supported 00:19:43.300 Device Self-Test: Not Supported 00:19:43.300 Directives: Not Supported 00:19:43.300 NVMe-MI: Not Supported 00:19:43.300 Virtualization Management: Not Supported 00:19:43.300 Doorbell Buffer Config: Not Supported 00:19:43.300 Get LBA Status Capability: Not Supported 00:19:43.300 Command & Feature Lockdown Capability: Not Supported 00:19:43.300 Abort Command Limit: 4 00:19:43.300 Async Event Request Limit: 4 00:19:43.300 Number of Firmware Slots: N/A 00:19:43.300 Firmware Slot 1 Read-Only: N/A 00:19:43.300 Firmware Activation Without Reset: N/A 00:19:43.300 Multiple Update Detection Support: N/A 00:19:43.300 Firmware Update Granularity: No Information Provided 00:19:43.300 Per-Namespace SMART Log: No 00:19:43.300 Asymmetric Namespace Access Log Page: Not Supported 00:19:43.300 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:19:43.300 Command Effects Log Page: Supported 00:19:43.300 Get Log Page Extended 
Data: Supported 00:19:43.300 Telemetry Log Pages: Not Supported 00:19:43.300 Persistent Event Log Pages: Not Supported 00:19:43.300 Supported Log Pages Log Page: May Support 00:19:43.300 Commands Supported & Effects Log Page: Not Supported 00:19:43.300 Feature Identifiers & Effects Log Page:May Support 00:19:43.300 NVMe-MI Commands & Effects Log Page: May Support 00:19:43.300 Data Area 4 for Telemetry Log: Not Supported 00:19:43.300 Error Log Page Entries Supported: 128 00:19:43.300 Keep Alive: Supported 00:19:43.300 Keep Alive Granularity: 10000 ms 00:19:43.300 00:19:43.300 NVM Command Set Attributes 00:19:43.300 ========================== 00:19:43.300 Submission Queue Entry Size 00:19:43.300 Max: 64 00:19:43.300 Min: 64 00:19:43.300 Completion Queue Entry Size 00:19:43.300 Max: 16 00:19:43.300 Min: 16 00:19:43.300 Number of Namespaces: 32 00:19:43.300 Compare Command: Supported 00:19:43.300 Write Uncorrectable Command: Not Supported 00:19:43.300 Dataset Management Command: Supported 00:19:43.300 Write Zeroes Command: Supported 00:19:43.300 Set Features Save Field: Not Supported 00:19:43.300 Reservations: Supported 00:19:43.300 Timestamp: Not Supported 00:19:43.301 Copy: Supported 00:19:43.301 Volatile Write Cache: Present 00:19:43.301 Atomic Write Unit (Normal): 1 00:19:43.301 Atomic Write Unit (PFail): 1 00:19:43.301 Atomic Compare & Write Unit: 1 00:19:43.301 Fused Compare & Write: Supported 00:19:43.301 Scatter-Gather List 00:19:43.301 SGL Command Set: Supported 00:19:43.301 SGL Keyed: Supported 00:19:43.301 SGL Bit Bucket Descriptor: Not Supported 00:19:43.301 SGL Metadata Pointer: Not Supported 00:19:43.301 Oversized SGL: Not Supported 00:19:43.301 SGL Metadata Address: Not Supported 00:19:43.301 SGL Offset: Supported 00:19:43.301 Transport SGL Data Block: Not Supported 00:19:43.301 Replay Protected Memory Block: Not Supported 00:19:43.301 00:19:43.301 Firmware Slot Information 00:19:43.301 ========================= 00:19:43.301 Active slot: 1 00:19:43.301 Slot 1 Firmware Revision: 25.01 00:19:43.301 00:19:43.301 00:19:43.301 Commands Supported and Effects 00:19:43.301 ============================== 00:19:43.301 Admin Commands 00:19:43.301 -------------- 00:19:43.301 Get Log Page (02h): Supported 00:19:43.301 Identify (06h): Supported 00:19:43.301 Abort (08h): Supported 00:19:43.301 Set Features (09h): Supported 00:19:43.301 Get Features (0Ah): Supported 00:19:43.301 Asynchronous Event Request (0Ch): Supported 00:19:43.301 Keep Alive (18h): Supported 00:19:43.301 I/O Commands 00:19:43.301 ------------ 00:19:43.301 Flush (00h): Supported LBA-Change 00:19:43.301 Write (01h): Supported LBA-Change 00:19:43.301 Read (02h): Supported 00:19:43.301 Compare (05h): Supported 00:19:43.301 Write Zeroes (08h): Supported LBA-Change 00:19:43.301 Dataset Management (09h): Supported LBA-Change 00:19:43.301 Copy (19h): Supported LBA-Change 00:19:43.301 00:19:43.301 Error Log 00:19:43.301 ========= 00:19:43.301 00:19:43.301 Arbitration 00:19:43.301 =========== 00:19:43.301 Arbitration Burst: 1 00:19:43.301 00:19:43.301 Power Management 00:19:43.301 ================ 00:19:43.301 Number of Power States: 1 00:19:43.301 Current Power State: Power State #0 00:19:43.301 Power State #0: 00:19:43.301 Max Power: 0.00 W 00:19:43.301 Non-Operational State: Operational 00:19:43.301 Entry Latency: Not Reported 00:19:43.301 Exit Latency: Not Reported 00:19:43.301 Relative Read Throughput: 0 00:19:43.301 Relative Read Latency: 0 00:19:43.301 Relative Write Throughput: 0 00:19:43.301 Relative Write Latency: 0 
00:19:43.301 Idle Power: Not Reported 00:19:43.301 Active Power: Not Reported 00:19:43.301 Non-Operational Permissive Mode: Not Supported 00:19:43.301 00:19:43.301 Health Information 00:19:43.301 ================== 00:19:43.301 Critical Warnings: 00:19:43.301 Available Spare Space: OK 00:19:43.301 Temperature: OK 00:19:43.301 Device Reliability: OK 00:19:43.301 Read Only: No 00:19:43.301 Volatile Memory Backup: OK 00:19:43.301 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:43.301 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:43.301 Available Spare: 0% 00:19:43.301 Available Spare Threshold: 0% 00:19:43.301 Life Percentage Used:[2024-11-04 14:45:52.386258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.301 [2024-11-04 14:45:52.386262] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x14ef750) 00:19:43.301 [2024-11-04 14:45:52.386267] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.301 [2024-11-04 14:45:52.386278] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15541c0, cid 7, qid 0 00:19:43.301 [2024-11-04 14:45:52.386318] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.301 [2024-11-04 14:45:52.386323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.301 [2024-11-04 14:45:52.386325] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.301 [2024-11-04 14:45:52.386328] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15541c0) on tqpair=0x14ef750 00:19:43.301 [2024-11-04 14:45:52.386353] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:19:43.301 [2024-11-04 14:45:52.386360] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553740) on tqpair=0x14ef750 00:19:43.301 [2024-11-04 14:45:52.386365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.301 [2024-11-04 14:45:52.386368] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15538c0) on tqpair=0x14ef750 00:19:43.301 [2024-11-04 14:45:52.386372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.301 [2024-11-04 14:45:52.386375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553a40) on tqpair=0x14ef750 00:19:43.301 [2024-11-04 14:45:52.386379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.301 [2024-11-04 14:45:52.386382] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553bc0) on tqpair=0x14ef750 00:19:43.301 [2024-11-04 14:45:52.386385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.301 [2024-11-04 14:45:52.386392] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.301 [2024-11-04 14:45:52.386395] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.301 [2024-11-04 14:45:52.386397] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ef750) 00:19:43.301 [2024-11-04 14:45:52.386402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:43.301 [2024-11-04 14:45:52.386414] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553bc0, cid 3, qid 0 00:19:43.301 [2024-11-04 14:45:52.386450] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.301 [2024-11-04 14:45:52.386459] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.301 [2024-11-04 14:45:52.386462] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.301 [2024-11-04 14:45:52.386465] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553bc0) on tqpair=0x14ef750 00:19:43.301 [2024-11-04 14:45:52.386470] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.301 [2024-11-04 14:45:52.386473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.301 [2024-11-04 14:45:52.386475] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ef750) 00:19:43.301 [2024-11-04 14:45:52.386481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.301 [2024-11-04 14:45:52.386493] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553bc0, cid 3, qid 0 00:19:43.301 [2024-11-04 14:45:52.386539] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.301 [2024-11-04 14:45:52.386543] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.301 [2024-11-04 14:45:52.386546] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.301 [2024-11-04 14:45:52.386548] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553bc0) on tqpair=0x14ef750 00:19:43.301 [2024-11-04 14:45:52.386552] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:19:43.301 [2024-11-04 14:45:52.386555] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:19:43.301 [2024-11-04 14:45:52.386562] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.301 [2024-11-04 14:45:52.386565] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.301 [2024-11-04 14:45:52.386567] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ef750) 00:19:43.301 [2024-11-04 14:45:52.386572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.301 [2024-11-04 14:45:52.386582] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553bc0, cid 3, qid 0 00:19:43.301 [2024-11-04 14:45:52.386627] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.301 [2024-11-04 14:45:52.386632] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.301 [2024-11-04 14:45:52.386634] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.301 [2024-11-04 14:45:52.386637] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553bc0) on tqpair=0x14ef750 00:19:43.301 [2024-11-04 14:45:52.386644] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.301 [2024-11-04 14:45:52.386647] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.301 [2024-11-04 14:45:52.386649] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ef750) 00:19:43.301 [2024-11-04 14:45:52.386655] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.301 [2024-11-04 14:45:52.386665] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553bc0, cid 3, qid 0 00:19:43.301 [2024-11-04 14:45:52.386706] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.301 [2024-11-04 14:45:52.386711] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.301 [2024-11-04 14:45:52.386714] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.301 [2024-11-04 14:45:52.386716] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553bc0) on tqpair=0x14ef750 00:19:43.301 [2024-11-04 14:45:52.386724] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.301 [2024-11-04 14:45:52.386726] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.301 [2024-11-04 14:45:52.386729] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ef750) 00:19:43.301 [2024-11-04 14:45:52.386734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.301 [2024-11-04 14:45:52.386744] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553bc0, cid 3, qid 0 00:19:43.301 [2024-11-04 14:45:52.386782] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.301 [2024-11-04 14:45:52.386787] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.301 [2024-11-04 14:45:52.386789] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.301 [2024-11-04 14:45:52.386792] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553bc0) on tqpair=0x14ef750 00:19:43.301 [2024-11-04 14:45:52.386800] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.301 [2024-11-04 14:45:52.386802] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.301 [2024-11-04 14:45:52.386805] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ef750) 00:19:43.302 [2024-11-04 14:45:52.386810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.302 [2024-11-04 14:45:52.386820] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553bc0, cid 3, qid 0 00:19:43.302 [2024-11-04 14:45:52.386854] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.302 [2024-11-04 14:45:52.386862] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.302 [2024-11-04 14:45:52.386865] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.386867] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553bc0) on tqpair=0x14ef750 00:19:43.302 [2024-11-04 14:45:52.386875] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.386878] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.386880] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ef750) 00:19:43.302 [2024-11-04 14:45:52.386886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.302 [2024-11-04 14:45:52.386895] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553bc0, cid 3, qid 0 00:19:43.302 [2024-11-04 14:45:52.386934] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.302 [2024-11-04 14:45:52.386939] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.302 [2024-11-04 14:45:52.386942] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.386944] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553bc0) on tqpair=0x14ef750 00:19:43.302 [2024-11-04 14:45:52.386952] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.386955] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.386957] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ef750) 00:19:43.302 [2024-11-04 14:45:52.386962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.302 [2024-11-04 14:45:52.386972] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553bc0, cid 3, qid 0 00:19:43.302 [2024-11-04 14:45:52.387005] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.302 [2024-11-04 14:45:52.387014] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.302 [2024-11-04 14:45:52.387016] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.387019] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553bc0) on tqpair=0x14ef750 00:19:43.302 [2024-11-04 14:45:52.387027] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.387029] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.387032] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ef750) 00:19:43.302 [2024-11-04 14:45:52.387037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.302 [2024-11-04 14:45:52.387047] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553bc0, cid 3, qid 0 00:19:43.302 [2024-11-04 14:45:52.387086] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.302 [2024-11-04 14:45:52.387090] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.302 [2024-11-04 14:45:52.387093] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.387095] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553bc0) on tqpair=0x14ef750 00:19:43.302 [2024-11-04 14:45:52.387103] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.387105] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.387108] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ef750) 00:19:43.302 [2024-11-04 14:45:52.387113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.302 [2024-11-04 14:45:52.387122] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553bc0, cid 3, qid 0 00:19:43.302 [2024-11-04 14:45:52.387161] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.302 [2024-11-04 
14:45:52.387166] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.302 [2024-11-04 14:45:52.387169] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.387171] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553bc0) on tqpair=0x14ef750 00:19:43.302 [2024-11-04 14:45:52.387179] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.387182] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.387184] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ef750) 00:19:43.302 [2024-11-04 14:45:52.387189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.302 [2024-11-04 14:45:52.387199] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553bc0, cid 3, qid 0 00:19:43.302 [2024-11-04 14:45:52.387235] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.302 [2024-11-04 14:45:52.387240] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.302 [2024-11-04 14:45:52.387242] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.387245] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553bc0) on tqpair=0x14ef750 00:19:43.302 [2024-11-04 14:45:52.387252] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.387255] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.387257] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ef750) 00:19:43.302 [2024-11-04 14:45:52.387263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.302 [2024-11-04 14:45:52.387272] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553bc0, cid 3, qid 0 00:19:43.302 [2024-11-04 14:45:52.387313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.302 [2024-11-04 14:45:52.387321] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.302 [2024-11-04 14:45:52.387323] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.387326] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553bc0) on tqpair=0x14ef750 00:19:43.302 [2024-11-04 14:45:52.387334] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.387336] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.387339] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ef750) 00:19:43.302 [2024-11-04 14:45:52.387344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.302 [2024-11-04 14:45:52.387354] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553bc0, cid 3, qid 0 00:19:43.302 [2024-11-04 14:45:52.387387] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.302 [2024-11-04 14:45:52.387396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.302 [2024-11-04 14:45:52.387398] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.302 
[2024-11-04 14:45:52.387401] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553bc0) on tqpair=0x14ef750 00:19:43.302 [2024-11-04 14:45:52.387408] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.387411] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.387414] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ef750) 00:19:43.302 [2024-11-04 14:45:52.387419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.302 [2024-11-04 14:45:52.387429] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553bc0, cid 3, qid 0 00:19:43.302 [2024-11-04 14:45:52.387465] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.302 [2024-11-04 14:45:52.387474] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.302 [2024-11-04 14:45:52.387476] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.387479] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553bc0) on tqpair=0x14ef750 00:19:43.302 [2024-11-04 14:45:52.387486] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.387489] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.387492] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ef750) 00:19:43.302 [2024-11-04 14:45:52.387497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.302 [2024-11-04 14:45:52.387507] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553bc0, cid 3, qid 0 00:19:43.302 [2024-11-04 14:45:52.387543] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.302 [2024-11-04 14:45:52.387551] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.302 [2024-11-04 14:45:52.387554] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.387556] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553bc0) on tqpair=0x14ef750 00:19:43.302 [2024-11-04 14:45:52.387564] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.387567] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.387569] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ef750) 00:19:43.302 [2024-11-04 14:45:52.387574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.302 [2024-11-04 14:45:52.387585] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553bc0, cid 3, qid 0 00:19:43.302 [2024-11-04 14:45:52.391618] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.302 [2024-11-04 14:45:52.391631] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.302 [2024-11-04 14:45:52.391634] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.391637] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553bc0) on tqpair=0x14ef750 00:19:43.302 [2024-11-04 14:45:52.391645] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.391648] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.391651] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ef750) 00:19:43.302 [2024-11-04 14:45:52.391657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.302 [2024-11-04 14:45:52.391671] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1553bc0, cid 3, qid 0 00:19:43.302 [2024-11-04 14:45:52.391704] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:43.302 [2024-11-04 14:45:52.391709] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:43.302 [2024-11-04 14:45:52.391712] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:43.302 [2024-11-04 14:45:52.391714] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1553bc0) on tqpair=0x14ef750 00:19:43.302 [2024-11-04 14:45:52.391720] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:19:43.302 0% 00:19:43.302 Data Units Read: 0 00:19:43.302 Data Units Written: 0 00:19:43.302 Host Read Commands: 0 00:19:43.302 Host Write Commands: 0 00:19:43.303 Controller Busy Time: 0 minutes 00:19:43.303 Power Cycles: 0 00:19:43.303 Power On Hours: 0 hours 00:19:43.303 Unsafe Shutdowns: 0 00:19:43.303 Unrecoverable Media Errors: 0 00:19:43.303 Lifetime Error Log Entries: 0 00:19:43.303 Warning Temperature Time: 0 minutes 00:19:43.303 Critical Temperature Time: 0 minutes 00:19:43.303 00:19:43.303 Number of Queues 00:19:43.303 ================ 00:19:43.303 Number of I/O Submission Queues: 127 00:19:43.303 Number of I/O Completion Queues: 127 00:19:43.303 00:19:43.303 Active Namespaces 00:19:43.303 ================= 00:19:43.303 Namespace ID:1 00:19:43.303 Error Recovery Timeout: Unlimited 00:19:43.303 Command Set Identifier: NVM (00h) 00:19:43.303 Deallocate: Supported 00:19:43.303 Deallocated/Unwritten Error: Not Supported 00:19:43.303 Deallocated Read Value: Unknown 00:19:43.303 Deallocate in Write Zeroes: Not Supported 00:19:43.303 Deallocated Guard Field: 0xFFFF 00:19:43.303 Flush: Supported 00:19:43.303 Reservation: Supported 00:19:43.303 Namespace Sharing Capabilities: Multiple Controllers 00:19:43.303 Size (in LBAs): 131072 (0GiB) 00:19:43.303 Capacity (in LBAs): 131072 (0GiB) 00:19:43.303 Utilization (in LBAs): 131072 (0GiB) 00:19:43.303 NGUID: ABCDEF0123456789ABCDEF0123456789 00:19:43.303 EUI64: ABCDEF0123456789 00:19:43.303 UUID: 84267f48-75fe-4fb7-bac6-b3847b3cad2a 00:19:43.303 Thin Provisioning: Not Supported 00:19:43.303 Per-NS Atomic Units: Yes 00:19:43.303 Atomic Boundary Size (Normal): 0 00:19:43.303 Atomic Boundary Size (PFail): 0 00:19:43.303 Atomic Boundary Offset: 0 00:19:43.303 Maximum Single Source Range Length: 65535 00:19:43.303 Maximum Copy Length: 65535 00:19:43.303 Maximum Source Range Count: 1 00:19:43.303 NGUID/EUI64 Never Reused: No 00:19:43.303 Namespace Write Protected: No 00:19:43.303 Number of LBA Formats: 1 00:19:43.303 Current LBA Format: LBA Format #00 00:19:43.303 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:43.303 00:19:43.303 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:43.561 rmmod nvme_tcp 00:19:43.561 rmmod nvme_fabrics 00:19:43.561 rmmod nvme_keyring 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 72697 ']' 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 72697 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 72697 ']' 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 72697 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72697 00:19:43.561 killing process with pid 72697 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72697' 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 72697 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 72697 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 
00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:43.561 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:43.846 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:43.846 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:43.846 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:43.846 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:43.846 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:43.846 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:43.846 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:43.846 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:43.846 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:43.846 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:43.846 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:43.846 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:43.846 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.846 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:43.846 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.846 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:19:43.846 00:19:43.846 real 0m2.439s 00:19:43.846 user 0m6.305s 00:19:43.846 sys 0m0.570s 00:19:43.846 ************************************ 00:19:43.847 END TEST nvmf_identify 00:19:43.847 ************************************ 00:19:43.847 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:43.847 14:45:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:43.847 14:45:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:43.847 14:45:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:43.847 14:45:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:43.847 14:45:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.847 ************************************ 00:19:43.847 START TEST nvmf_perf 00:19:43.847 ************************************ 00:19:43.847 14:45:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:44.106 * Looking for test storage... 
00:19:44.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:44.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.106 --rc genhtml_branch_coverage=1 00:19:44.106 --rc genhtml_function_coverage=1 00:19:44.106 --rc genhtml_legend=1 00:19:44.106 --rc geninfo_all_blocks=1 00:19:44.106 --rc geninfo_unexecuted_blocks=1 00:19:44.106 00:19:44.106 ' 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:44.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.106 --rc genhtml_branch_coverage=1 00:19:44.106 --rc genhtml_function_coverage=1 00:19:44.106 --rc genhtml_legend=1 00:19:44.106 --rc geninfo_all_blocks=1 00:19:44.106 --rc geninfo_unexecuted_blocks=1 00:19:44.106 00:19:44.106 ' 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:44.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.106 --rc genhtml_branch_coverage=1 00:19:44.106 --rc genhtml_function_coverage=1 00:19:44.106 --rc genhtml_legend=1 00:19:44.106 --rc geninfo_all_blocks=1 00:19:44.106 --rc geninfo_unexecuted_blocks=1 00:19:44.106 00:19:44.106 ' 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:44.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.106 --rc genhtml_branch_coverage=1 00:19:44.106 --rc genhtml_function_coverage=1 00:19:44.106 --rc genhtml_legend=1 00:19:44.106 --rc geninfo_all_blocks=1 00:19:44.106 --rc geninfo_unexecuted_blocks=1 00:19:44.106 00:19:44.106 ' 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:44.106 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:44.107 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:44.107 Cannot find device "nvmf_init_br" 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:44.107 Cannot find device "nvmf_init_br2" 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:44.107 Cannot find device "nvmf_tgt_br" 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:44.107 Cannot find device "nvmf_tgt_br2" 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:44.107 Cannot find device "nvmf_init_br" 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:44.107 Cannot find device "nvmf_init_br2" 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:44.107 Cannot find device "nvmf_tgt_br" 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:44.107 Cannot find device "nvmf_tgt_br2" 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:44.107 Cannot find device "nvmf_br" 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:44.107 Cannot find device "nvmf_init_if" 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:44.107 Cannot find device "nvmf_init_if2" 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:44.107 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:44.107 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:44.107 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:44.366 14:45:53 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:44.366 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:44.366 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:19:44.366 00:19:44.366 --- 10.0.0.3 ping statistics --- 00:19:44.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.366 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:44.366 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:19:44.366 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms 00:19:44.366 00:19:44.366 --- 10.0.0.4 ping statistics --- 00:19:44.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.366 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:44.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:44.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:19:44.366 00:19:44.366 --- 10.0.0.1 ping statistics --- 00:19:44.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.366 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:44.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:44.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.032 ms 00:19:44.366 00:19:44.366 --- 10.0.0.2 ping statistics --- 00:19:44.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.366 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:44.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=72956 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 72956 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 72956 ']' 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:44.366 14:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:44.366 [2024-11-04 14:45:53.442286] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:19:44.366 [2024-11-04 14:45:53.442461] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.625 [2024-11-04 14:45:53.575494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:44.625 [2024-11-04 14:45:53.611914] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:44.625 [2024-11-04 14:45:53.612096] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:44.625 [2024-11-04 14:45:53.612162] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:44.625 [2024-11-04 14:45:53.612190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:44.625 [2024-11-04 14:45:53.612206] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:44.625 [2024-11-04 14:45:53.612957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.625 [2024-11-04 14:45:53.613061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:44.625 [2024-11-04 14:45:53.613590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:44.625 [2024-11-04 14:45:53.613592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.625 [2024-11-04 14:45:53.645538] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:45.190 14:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:45.190 14:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:19:45.191 14:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:45.191 14:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:45.191 14:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:45.448 14:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:45.448 14:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:45.448 14:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:19:45.705 14:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:19:45.705 14:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:19:45.962 14:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:19:45.962 14:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:19:46.219 14:45:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:19:46.219 14:45:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:19:46.219 14:45:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:19:46.219 14:45:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:19:46.219 14:45:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:46.219 [2024-11-04 14:45:55.319967] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:46.219 14:45:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:46.477 14:45:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:46.477 14:45:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:46.736 14:45:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:46.736 14:45:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:19:46.994 14:45:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:46.994 [2024-11-04 14:45:56.120959] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:47.251 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:19:47.251 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:19:47.251 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:19:47.251 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:19:47.251 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:19:48.634 Initializing NVMe Controllers 00:19:48.634 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:48.634 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:19:48.634 Initialization complete. Launching workers. 
00:19:48.634 ======================================================== 00:19:48.634 Latency(us) 00:19:48.634 Device Information : IOPS MiB/s Average min max 00:19:48.634 PCIE (0000:00:10.0) NSID 1 from core 0: 33631.97 131.37 951.14 230.52 5856.55 00:19:48.634 ======================================================== 00:19:48.634 Total : 33631.97 131.37 951.14 230.52 5856.55 00:19:48.634 00:19:48.634 14:45:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:49.577 Initializing NVMe Controllers 00:19:49.577 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:49.577 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:49.577 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:49.577 Initialization complete. Launching workers. 00:19:49.577 ======================================================== 00:19:49.578 Latency(us) 00:19:49.578 Device Information : IOPS MiB/s Average min max 00:19:49.578 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5424.95 21.19 184.13 71.81 4160.44 00:19:49.578 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.00 0.49 8063.54 6006.83 12012.58 00:19:49.578 ======================================================== 00:19:49.578 Total : 5549.94 21.68 361.59 71.81 12012.58 00:19:49.578 00:19:49.835 14:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:51.209 Initializing NVMe Controllers 00:19:51.209 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:51.209 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:51.209 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:51.209 Initialization complete. Launching workers. 00:19:51.209 ======================================================== 00:19:51.209 Latency(us) 00:19:51.209 Device Information : IOPS MiB/s Average min max 00:19:51.209 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9518.18 37.18 3361.29 510.95 12168.77 00:19:51.209 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3981.24 15.55 8051.70 6823.14 16410.19 00:19:51.209 ======================================================== 00:19:51.209 Total : 13499.42 52.73 4744.58 510.95 16410.19 00:19:51.209 00:19:51.209 14:46:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:19:51.209 14:46:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:53.736 Initializing NVMe Controllers 00:19:53.736 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:53.736 Controller IO queue size 128, less than required. 00:19:53.736 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:53.736 Controller IO queue size 128, less than required. 
00:19:53.736 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:53.736 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:53.736 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:53.736 Initialization complete. Launching workers. 00:19:53.736 ======================================================== 00:19:53.736 Latency(us) 00:19:53.736 Device Information : IOPS MiB/s Average min max 00:19:53.736 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2300.99 575.25 56039.15 29148.93 85237.79 00:19:53.736 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 706.00 176.50 188911.10 59367.29 303365.10 00:19:53.736 ======================================================== 00:19:53.736 Total : 3006.99 751.75 87235.56 29148.93 303365.10 00:19:53.736 00:19:53.736 14:46:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:19:53.995 Initializing NVMe Controllers 00:19:53.995 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:53.995 Controller IO queue size 128, less than required. 00:19:53.995 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:53.995 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:19:53.995 Controller IO queue size 128, less than required. 00:19:53.995 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:53.995 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:19:53.995 WARNING: Some requested NVMe devices were skipped 00:19:53.995 No valid NVMe controllers or AIO or URING devices found 00:19:53.995 14:46:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:19:56.614 Initializing NVMe Controllers 00:19:56.614 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:56.614 Controller IO queue size 128, less than required. 00:19:56.614 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:56.614 Controller IO queue size 128, less than required. 00:19:56.614 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:56.614 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:56.614 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:56.614 Initialization complete. Launching workers. 
00:19:56.614 00:19:56.614 ==================== 00:19:56.614 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:19:56.614 TCP transport: 00:19:56.614 polls: 13993 00:19:56.614 idle_polls: 7505 00:19:56.614 sock_completions: 6488 00:19:56.614 nvme_completions: 9109 00:19:56.614 submitted_requests: 13648 00:19:56.614 queued_requests: 1 00:19:56.614 00:19:56.614 ==================== 00:19:56.614 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:19:56.614 TCP transport: 00:19:56.614 polls: 14164 00:19:56.614 idle_polls: 7790 00:19:56.614 sock_completions: 6374 00:19:56.614 nvme_completions: 9303 00:19:56.614 submitted_requests: 13938 00:19:56.614 queued_requests: 1 00:19:56.614 ======================================================== 00:19:56.614 Latency(us) 00:19:56.614 Device Information : IOPS MiB/s Average min max 00:19:56.614 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2276.67 569.17 56945.39 28938.97 82463.54 00:19:56.614 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2325.16 581.29 55310.55 25347.40 77966.52 00:19:56.614 ======================================================== 00:19:56.614 Total : 4601.83 1150.46 56119.36 25347.40 82463.54 00:19:56.614 00:19:56.614 14:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:19:56.614 14:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:56.895 14:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:19:56.895 14:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:19:56.895 14:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:19:56.895 14:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:56.895 14:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:19:56.895 14:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:56.895 14:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:19:56.895 14:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:56.895 14:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:56.895 rmmod nvme_tcp 00:19:56.895 rmmod nvme_fabrics 00:19:56.895 rmmod nvme_keyring 00:19:56.895 14:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:56.895 14:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:19:56.895 14:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:19:56.895 14:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 72956 ']' 00:19:56.895 14:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 72956 00:19:56.895 14:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 72956 ']' 00:19:56.895 14:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 72956 00:19:56.895 14:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:19:56.895 14:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:56.895 14:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72956 00:19:56.895 killing process with pid 72956 00:19:56.895 14:46:05 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:56.895 14:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:56.895 14:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72956' 00:19:56.895 14:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 72956 00:19:56.895 14:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 72956 00:19:59.424 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:59.424 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:59.424 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:59.424 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:19:59.424 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:59.424 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:19:59.424 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:19:59.424 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:59.424 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:59.424 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:59.424 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:59.424 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:59.424 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:59.424 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:59.424 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:59.424 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:59.424 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:59.683 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:59.683 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:59.683 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:59.683 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:59.683 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:59.683 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:59.683 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.683 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:59.683 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.683 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:19:59.683 ************************************ 00:19:59.683 END TEST nvmf_perf 00:19:59.683 ************************************ 
00:19:59.683 00:19:59.683 real 0m15.749s 00:19:59.683 user 0m54.172s 00:19:59.683 sys 0m3.328s 00:19:59.683 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:59.683 14:46:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:59.683 14:46:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:19:59.683 14:46:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:59.683 14:46:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:59.683 14:46:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.683 ************************************ 00:19:59.683 START TEST nvmf_fio_host 00:19:59.683 ************************************ 00:19:59.683 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:19:59.683 * Looking for test storage... 00:19:59.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:59.684 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:59.684 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:19:59.684 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:00.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.001 --rc genhtml_branch_coverage=1 00:20:00.001 --rc genhtml_function_coverage=1 00:20:00.001 --rc genhtml_legend=1 00:20:00.001 --rc geninfo_all_blocks=1 00:20:00.001 --rc geninfo_unexecuted_blocks=1 00:20:00.001 00:20:00.001 ' 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:00.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.001 --rc genhtml_branch_coverage=1 00:20:00.001 --rc genhtml_function_coverage=1 00:20:00.001 --rc genhtml_legend=1 00:20:00.001 --rc geninfo_all_blocks=1 00:20:00.001 --rc geninfo_unexecuted_blocks=1 00:20:00.001 00:20:00.001 ' 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:00.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.001 --rc genhtml_branch_coverage=1 00:20:00.001 --rc genhtml_function_coverage=1 00:20:00.001 --rc genhtml_legend=1 00:20:00.001 --rc geninfo_all_blocks=1 00:20:00.001 --rc geninfo_unexecuted_blocks=1 00:20:00.001 00:20:00.001 ' 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:00.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.001 --rc genhtml_branch_coverage=1 00:20:00.001 --rc genhtml_function_coverage=1 00:20:00.001 --rc genhtml_legend=1 00:20:00.001 --rc geninfo_all_blocks=1 00:20:00.001 --rc geninfo_unexecuted_blocks=1 00:20:00.001 00:20:00.001 ' 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.001 14:46:08 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.001 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.002 14:46:08 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:00.002 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:00.002 Cannot find device "nvmf_init_br" 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:00.002 Cannot find device "nvmf_init_br2" 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:00.002 Cannot find device "nvmf_tgt_br" 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:20:00.002 Cannot find device "nvmf_tgt_br2" 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:00.002 Cannot find device "nvmf_init_br" 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:00.002 Cannot find device "nvmf_init_br2" 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:00.002 Cannot find device "nvmf_tgt_br" 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:00.002 Cannot find device "nvmf_tgt_br2" 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:20:00.002 14:46:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:00.002 Cannot find device "nvmf_br" 00:20:00.002 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:20:00.002 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:00.002 Cannot find device "nvmf_init_if" 00:20:00.002 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:20:00.002 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:00.002 Cannot find device "nvmf_init_if2" 00:20:00.002 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:20:00.002 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:00.002 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:00.002 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:20:00.002 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:00.002 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:00.002 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:20:00.002 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:00.002 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:00.002 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:00.002 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:00.002 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:00.002 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:00.002 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:00.002 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:20:00.002 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:00.002 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:00.002 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:00.002 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:00.002 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:00.002 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:00.002 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:00.002 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:00.003 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:00.003 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:00.003 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:00.003 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:00.003 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:00.261 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:00.261 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:00.261 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:00.261 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:00.261 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:00.261 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:00.261 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:00.261 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:00.261 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:00.261 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:00.261 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:00.261 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:00.261 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:00.261 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:20:00.261 00:20:00.261 --- 10.0.0.3 ping statistics --- 00:20:00.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.261 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:00.261 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:00.261 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:00.261 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:20:00.261 00:20:00.261 --- 10.0.0.4 ping statistics --- 00:20:00.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.261 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:20:00.261 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:00.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:00.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.013 ms 00:20:00.261 00:20:00.261 --- 10.0.0.1 ping statistics --- 00:20:00.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.261 rtt min/avg/max/mdev = 0.013/0.013/0.013/0.000 ms 00:20:00.261 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:00.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:00.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.030 ms 00:20:00.261 00:20:00.261 --- 10.0.0.2 ping statistics --- 00:20:00.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.261 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:20:00.261 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:00.262 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:20:00.262 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:00.262 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:00.262 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:00.262 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:00.262 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:00.262 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:00.262 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:00.262 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:20:00.262 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:00.262 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:00.262 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.262 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=73406 00:20:00.262 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:00.262 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:00.262 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 73406 00:20:00.262 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@833 -- # '[' -z 73406 ']' 00:20:00.262 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.262 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:00.262 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.262 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:00.262 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.262 [2024-11-04 14:46:09.252423] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:20:00.262 [2024-11-04 14:46:09.252472] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.262 [2024-11-04 14:46:09.389796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:00.520 [2024-11-04 14:46:09.422226] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.520 [2024-11-04 14:46:09.422269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.520 [2024-11-04 14:46:09.422274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.520 [2024-11-04 14:46:09.422278] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.520 [2024-11-04 14:46:09.422282] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:00.520 [2024-11-04 14:46:09.422918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.520 [2024-11-04 14:46:09.423007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.520 [2024-11-04 14:46:09.423351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:00.520 [2024-11-04 14:46:09.423353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.520 [2024-11-04 14:46:09.454525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:01.086 14:46:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:01.086 14:46:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:20:01.086 14:46:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:01.344 [2024-11-04 14:46:10.242271] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.344 14:46:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:01.344 14:46:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:01.344 14:46:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.344 14:46:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:01.601 Malloc1 00:20:01.601 14:46:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:01.859 14:46:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:01.859 14:46:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:02.116 [2024-11-04 14:46:11.129084] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:02.116 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:20:02.374 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:20:02.374 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:20:02.374 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:20:02.374 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:02.374 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:02.374 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:20:02.374 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:02.374 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:20:02.374 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:02.374 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:02.374 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:02.374 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:02.374 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:20:02.374 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:02.374 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:02.374 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:02.374 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:02.374 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:20:02.374 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:02.374 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:02.374 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:02.374 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:02.374 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:20:02.374 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:02.374 fio-3.35 00:20:02.374 Starting 1 thread 00:20:04.898 00:20:04.898 test: (groupid=0, jobs=1): err= 0: pid=73488: Mon Nov 4 14:46:13 2024 00:20:04.899 read: IOPS=11.6k, BW=45.3MiB/s (47.5MB/s)(90.8MiB/2005msec) 00:20:04.899 slat (nsec): min=1457, max=122671, avg=1755.39, stdev=1090.32 00:20:04.899 clat (usec): min=1386, max=9077, avg=5780.21, stdev=842.94 00:20:04.899 lat (usec): min=1405, max=9079, avg=5781.97, stdev=843.08 00:20:04.899 clat percentiles (usec): 00:20:04.899 | 1.00th=[ 4490], 5.00th=[ 4686], 10.00th=[ 4817], 20.00th=[ 5014], 00:20:04.899 | 30.00th=[ 5145], 40.00th=[ 5276], 50.00th=[ 5473], 60.00th=[ 6128], 00:20:04.899 | 70.00th=[ 6456], 80.00th=[ 6652], 90.00th=[ 6915], 95.00th=[ 7111], 00:20:04.899 | 99.00th=[ 7439], 99.50th=[ 7504], 99.90th=[ 7963], 99.95th=[ 8455], 00:20:04.899 | 99.99th=[ 8848] 00:20:04.899 bw ( KiB/s): min=39680, max=53264, per=99.94%, avg=46358.00, stdev=7128.89, samples=4 00:20:04.899 iops : min= 9920, max=13316, avg=11589.50, stdev=1782.22, samples=4 00:20:04.899 write: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(90.2MiB/2005msec); 0 zone resets 00:20:04.899 slat (nsec): min=1493, max=159859, avg=1832.93, stdev=1294.06 00:20:04.899 clat (usec): min=989, max=8595, avg=5242.92, stdev=766.11 00:20:04.899 lat (usec): min=995, max=8596, avg=5244.75, stdev=766.28 00:20:04.899 clat 
percentiles (usec): 00:20:04.899 | 1.00th=[ 4047], 5.00th=[ 4293], 10.00th=[ 4359], 20.00th=[ 4555], 00:20:04.899 | 30.00th=[ 4621], 40.00th=[ 4752], 50.00th=[ 4948], 60.00th=[ 5604], 00:20:04.899 | 70.00th=[ 5866], 80.00th=[ 6063], 90.00th=[ 6259], 95.00th=[ 6390], 00:20:04.899 | 99.00th=[ 6718], 99.50th=[ 6849], 99.90th=[ 7308], 99.95th=[ 8094], 00:20:04.899 | 99.99th=[ 8586] 00:20:04.899 bw ( KiB/s): min=40160, max=51968, per=100.00%, avg=46066.00, stdev=6778.21, samples=4 00:20:04.899 iops : min=10040, max=12992, avg=11516.50, stdev=1694.55, samples=4 00:20:04.899 lat (usec) : 1000=0.01% 00:20:04.899 lat (msec) : 2=0.04%, 4=0.35%, 10=99.61% 00:20:04.899 cpu : usr=79.99%, sys=15.97%, ctx=14, majf=0, minf=7 00:20:04.899 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:04.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:04.899 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:04.899 issued rwts: total=23251,23088,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:04.899 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:04.899 00:20:04.899 Run status group 0 (all jobs): 00:20:04.899 READ: bw=45.3MiB/s (47.5MB/s), 45.3MiB/s-45.3MiB/s (47.5MB/s-47.5MB/s), io=90.8MiB (95.2MB), run=2005-2005msec 00:20:04.899 WRITE: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=90.2MiB (94.6MB), run=2005-2005msec 00:20:04.899 14:46:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:20:04.899 14:46:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:20:04.899 14:46:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:04.899 14:46:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:04.899 14:46:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:20:04.899 14:46:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:04.899 14:46:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:20:04.899 14:46:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:04.899 14:46:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:04.899 14:46:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:20:04.899 14:46:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:04.899 14:46:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:04.899 14:46:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:04.899 14:46:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:04.899 14:46:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:04.899 14:46:13 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:04.899 14:46:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:20:04.899 14:46:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:04.899 14:46:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:04.899 14:46:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:04.899 14:46:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:04.899 14:46:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:20:04.899 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:04.899 fio-3.35 00:20:04.899 Starting 1 thread 00:20:07.432 00:20:07.432 test: (groupid=0, jobs=1): err= 0: pid=73532: Mon Nov 4 14:46:16 2024 00:20:07.432 read: IOPS=10.7k, BW=168MiB/s (176MB/s)(336MiB/2002msec) 00:20:07.432 slat (usec): min=3, max=113, avg= 3.35, stdev= 1.59 00:20:07.432 clat (usec): min=2556, max=12769, avg=6428.22, stdev=1960.26 00:20:07.432 lat (usec): min=2559, max=12772, avg=6431.57, stdev=1960.35 00:20:07.432 clat percentiles (usec): 00:20:07.432 | 1.00th=[ 3261], 5.00th=[ 3785], 10.00th=[ 4178], 20.00th=[ 4621], 00:20:07.432 | 30.00th=[ 5080], 40.00th=[ 5604], 50.00th=[ 6128], 60.00th=[ 6652], 00:20:07.432 | 70.00th=[ 7308], 80.00th=[ 8225], 90.00th=[ 9372], 95.00th=[ 9896], 00:20:07.432 | 99.00th=[11207], 99.50th=[11731], 99.90th=[12649], 99.95th=[12649], 00:20:07.432 | 99.99th=[12780] 00:20:07.432 bw ( KiB/s): min=80640, max=92710, per=50.11%, avg=86025.50, stdev=5001.80, samples=4 00:20:07.432 iops : min= 5040, max= 5794, avg=5376.50, stdev=312.45, samples=4 00:20:07.432 write: IOPS=6165, BW=96.3MiB/s (101MB/s)(176MiB/1826msec); 0 zone resets 00:20:07.432 slat (usec): min=36, max=445, avg=37.84, stdev= 7.17 00:20:07.432 clat (usec): min=2108, max=15233, avg=9670.37, stdev=1313.05 00:20:07.432 lat (usec): min=2145, max=15270, avg=9708.22, stdev=1313.01 00:20:07.432 clat percentiles (usec): 00:20:07.432 | 1.00th=[ 6456], 5.00th=[ 7635], 10.00th=[ 8029], 20.00th=[ 8586], 00:20:07.432 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10028], 00:20:07.432 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11338], 95.00th=[11731], 00:20:07.432 | 99.00th=[12518], 99.50th=[12780], 99.90th=[13173], 99.95th=[13173], 00:20:07.432 | 99.99th=[13304] 00:20:07.432 bw ( KiB/s): min=85056, max=96351, per=90.55%, avg=89327.75, stdev=4893.92, samples=4 00:20:07.432 iops : min= 5316, max= 6021, avg=5582.75, stdev=305.42, samples=4 00:20:07.432 lat (msec) : 4=4.90%, 10=78.07%, 20=17.04% 00:20:07.432 cpu : usr=86.51%, sys=9.39%, ctx=2, majf=0, minf=3 00:20:07.432 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:20:07.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:07.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:07.432 issued rwts: total=21482,11258,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:07.432 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:07.432 00:20:07.432 Run status group 0 (all 
jobs): 00:20:07.432 READ: bw=168MiB/s (176MB/s), 168MiB/s-168MiB/s (176MB/s-176MB/s), io=336MiB (352MB), run=2002-2002msec 00:20:07.432 WRITE: bw=96.3MiB/s (101MB/s), 96.3MiB/s-96.3MiB/s (101MB/s-101MB/s), io=176MiB (184MB), run=1826-1826msec 00:20:07.432 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:07.432 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:20:07.432 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:07.432 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:20:07.432 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:20:07.432 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:07.432 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:20:07.432 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:07.432 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:20:07.432 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:07.432 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:07.432 rmmod nvme_tcp 00:20:07.432 rmmod nvme_fabrics 00:20:07.432 rmmod nvme_keyring 00:20:07.432 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:07.432 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:20:07.432 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:20:07.432 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 73406 ']' 00:20:07.432 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 73406 00:20:07.432 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 73406 ']' 00:20:07.432 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 73406 00:20:07.432 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:20:07.432 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:07.432 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73406 00:20:07.432 killing process with pid 73406 00:20:07.432 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:07.432 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:07.432 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73406' 00:20:07.432 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 73406 00:20:07.432 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 73406 00:20:07.691 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:07.691 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:07.691 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:07.691 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:20:07.691 14:46:16 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:20:07.691 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:20:07.691 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:07.691 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:07.691 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:07.691 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:07.691 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:07.691 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:07.691 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:07.691 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:07.691 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:07.691 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:07.691 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:07.691 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:07.691 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:07.691 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:07.691 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:07.949 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:07.949 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:07.949 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.949 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:07.949 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.949 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:20:07.949 ************************************ 00:20:07.949 END TEST nvmf_fio_host 00:20:07.949 ************************************ 00:20:07.949 00:20:07.949 real 0m8.157s 00:20:07.949 user 0m33.354s 00:20:07.949 sys 0m1.817s 00:20:07.949 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:07.949 14:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.949 14:46:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:07.949 14:46:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:07.949 14:46:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:07.949 14:46:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.949 ************************************ 00:20:07.949 START TEST nvmf_failover 
00:20:07.949 ************************************ 00:20:07.949 14:46:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:07.949 * Looking for test storage... 00:20:07.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:07.950 14:46:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:07.950 14:46:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:20:07.950 14:46:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:07.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.950 --rc genhtml_branch_coverage=1 00:20:07.950 --rc genhtml_function_coverage=1 00:20:07.950 --rc genhtml_legend=1 00:20:07.950 --rc geninfo_all_blocks=1 00:20:07.950 --rc geninfo_unexecuted_blocks=1 00:20:07.950 00:20:07.950 ' 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:07.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.950 --rc genhtml_branch_coverage=1 00:20:07.950 --rc genhtml_function_coverage=1 00:20:07.950 --rc genhtml_legend=1 00:20:07.950 --rc geninfo_all_blocks=1 00:20:07.950 --rc geninfo_unexecuted_blocks=1 00:20:07.950 00:20:07.950 ' 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:07.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.950 --rc genhtml_branch_coverage=1 00:20:07.950 --rc genhtml_function_coverage=1 00:20:07.950 --rc genhtml_legend=1 00:20:07.950 --rc geninfo_all_blocks=1 00:20:07.950 --rc geninfo_unexecuted_blocks=1 00:20:07.950 00:20:07.950 ' 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:07.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.950 --rc genhtml_branch_coverage=1 00:20:07.950 --rc genhtml_function_coverage=1 00:20:07.950 --rc genhtml_legend=1 00:20:07.950 --rc geninfo_all_blocks=1 00:20:07.950 --rc geninfo_unexecuted_blocks=1 00:20:07.950 00:20:07.950 ' 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.950 
14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:07.950 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:07.950 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
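The trace above is the preamble of test/nvmf/host/failover.sh: it fixes the Malloc bdev geometry, points rpc_py at the target's default RPC socket, reserves a second RPC socket for the initiator-side bdevperf process, and then calls nvmftestinit. A condensed sketch of that preamble (values and paths exactly as they appear in the trace; only the comments are added):

  MALLOC_BDEV_SIZE=64                          # backing Malloc bdev, 64 MB
  MALLOC_BLOCK_SIZE=512                        # 512-byte blocks
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bdevperf_rpc_sock=/var/tmp/bdevperf.sock     # separate RPC socket for bdevperf
  nvmftestinit                                 # builds the veth/netns test network traced below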
00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:08.209 Cannot find device "nvmf_init_br" 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:08.209 Cannot find device "nvmf_init_br2" 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:20:08.209 Cannot find device "nvmf_tgt_br" 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:08.209 Cannot find device "nvmf_tgt_br2" 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:08.209 Cannot find device "nvmf_init_br" 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:08.209 Cannot find device "nvmf_init_br2" 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:08.209 Cannot find device "nvmf_tgt_br" 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:08.209 Cannot find device "nvmf_tgt_br2" 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:08.209 Cannot find device "nvmf_br" 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:08.209 Cannot find device "nvmf_init_if" 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:08.209 Cannot find device "nvmf_init_if2" 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:08.209 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:08.209 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:08.209 
14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:08.209 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:08.467 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:08.467 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:08.467 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:08.467 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:08.467 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:08.467 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:08.467 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:20:08.467 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:08.467 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:08.467 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:20:08.467 00:20:08.467 --- 10.0.0.3 ping statistics --- 00:20:08.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.467 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:20:08.467 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:08.467 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:08.467 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:20:08.467 00:20:08.467 --- 10.0.0.4 ping statistics --- 00:20:08.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.467 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:20:08.467 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:08.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:08.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:20:08.467 00:20:08.467 --- 10.0.0.1 ping statistics --- 00:20:08.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.468 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:20:08.468 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:08.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:08.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:20:08.468 00:20:08.468 --- 10.0.0.2 ping statistics --- 00:20:08.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.468 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:20:08.468 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:08.468 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:20:08.468 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:08.468 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:08.468 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:08.468 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:08.468 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:08.468 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:08.468 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:08.468 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:20:08.468 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:08.468 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:08.468 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:08.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
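nvmf_veth_init has now finished building the virtual test network: two initiator-side veth endpoints (nvmf_init_if/if2, 10.0.0.1-2) in the root namespace, two target-side endpoints (nvmf_tgt_if/if2, 10.0.0.3-4) moved into the nvmf_tgt_ns_spdk namespace, their bridge-side peers enslaved to nvmf_br, iptables ACCEPT rules for TCP port 4420, and ping checks in both directions. A condensed sketch of the same topology, with the interface names and addresses taken from the trace (ordering simplified; the initial "Cannot find device" cleanup pass and error handling omitted):

  # Namespace for the target; the initiator side stays in the root namespace.
  ip netns add nvmf_tgt_ns_spdk

  # Two veth pairs per side: *_if carries traffic, *_br is the peer enslaved to the bridge.
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Addressing: initiators 10.0.0.1-2, targets 10.0.0.3-4 (all /24).
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # Bring everything up and tie the bridge-side peers together.
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done

  # Allow NVMe/TCP traffic in and across the bridge, then verify reachability.
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4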
00:20:08.468 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=73794 00:20:08.468 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 73794 00:20:08.468 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 73794 ']' 00:20:08.468 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.468 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:08.468 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:08.468 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.468 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:08.468 14:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:08.468 [2024-11-04 14:46:17.445627] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:20:08.468 [2024-11-04 14:46:17.445680] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.468 [2024-11-04 14:46:17.581115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:08.726 [2024-11-04 14:46:17.618637] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.726 [2024-11-04 14:46:17.618816] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.726 [2024-11-04 14:46:17.618898] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.726 [2024-11-04 14:46:17.619004] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.726 [2024-11-04 14:46:17.619065] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
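nvmfappstart then launches the target application inside that namespace (pid 73794 above, reactors on cores 1-3 per the 0xE mask) and blocks until its RPC socket is usable. A minimal sketch of that launch-and-wait step; the polling loop is an assumed stand-in for the harness's waitforlisten helper, not its actual implementation:

  # Flags as shown in the trace: shm id 0, all tracepoint groups, core mask 0xE.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!

  # Hedged stand-in for waitforlisten: poll until the Unix-domain RPC socket exists.
  until [ -S /var/tmp/spdk.sock ]; do
      sleep 0.5
  done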
00:20:08.726 [2024-11-04 14:46:17.619975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:08.726 [2024-11-04 14:46:17.620054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:08.726 [2024-11-04 14:46:17.620056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.726 [2024-11-04 14:46:17.655221] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:09.292 14:46:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:09.292 14:46:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:20:09.292 14:46:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:09.292 14:46:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:09.292 14:46:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:09.292 14:46:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.292 14:46:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:09.557 [2024-11-04 14:46:18.522011] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.557 14:46:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:09.815 Malloc0 00:20:09.815 14:46:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:10.073 14:46:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:10.073 14:46:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:10.330 [2024-11-04 14:46:19.364325] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:10.330 14:46:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:10.606 [2024-11-04 14:46:19.576479] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:10.606 14:46:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:20:10.888 [2024-11-04 14:46:19.776630] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:20:10.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
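With the target up, failover.sh provisions the subsystem under test entirely over RPC: a TCP transport, a 64 MB Malloc namespace, and listeners on ports 4420-4422 of 10.0.0.3, after which bdevperf is started against its own RPC socket. The same sequence, condensed (rpc.py subcommands and arguments exactly as in the trace; the loop over ports is only a shorthand):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc_py nvmf_create_transport -t tcp -o -u 8192          # transport options as set by the harness
  $rpc_py bdev_malloc_create 64 512 -b Malloc0              # 64 MB backing bdev, 512-byte blocks
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                            # three listeners = three failover paths
      $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
  done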
00:20:10.888 14:46:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=73852 00:20:10.888 14:46:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:10.888 14:46:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:20:10.888 14:46:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 73852 /var/tmp/bdevperf.sock 00:20:10.888 14:46:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 73852 ']' 00:20:10.888 14:46:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:10.888 14:46:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:10.888 14:46:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:10.888 14:46:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:10.888 14:46:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:11.821 14:46:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:11.821 14:46:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:20:11.821 14:46:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:11.821 NVMe0n1 00:20:11.821 14:46:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:12.078 00:20:12.336 14:46:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=73870 00:20:12.336 14:46:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:12.336 14:46:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:20:13.276 14:46:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:13.276 14:46:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:20:16.600 14:46:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:16.600 00:20:16.600 14:46:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:16.857 14:46:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:20:20.135 14:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:20.135 [2024-11-04 14:46:29.076980] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:20.135 14:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:20:21.066 14:46:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:20:21.324 14:46:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 73870 00:20:27.892 { 00:20:27.892 "results": [ 00:20:27.892 { 00:20:27.892 "job": "NVMe0n1", 00:20:27.892 "core_mask": "0x1", 00:20:27.892 "workload": "verify", 00:20:27.892 "status": "finished", 00:20:27.892 "verify_range": { 00:20:27.892 "start": 0, 00:20:27.892 "length": 16384 00:20:27.892 }, 00:20:27.892 "queue_depth": 128, 00:20:27.892 "io_size": 4096, 00:20:27.892 "runtime": 15.007937, 00:20:27.892 "iops": 12034.298917965874, 00:20:27.892 "mibps": 47.0089801483042, 00:20:27.892 "io_failed": 4140, 00:20:27.892 "io_timeout": 0, 00:20:27.892 "avg_latency_us": 10373.66091758093, 00:20:27.892 "min_latency_us": 441.10769230769233, 00:20:27.892 "max_latency_us": 15526.99076923077 00:20:27.892 } 00:20:27.892 ], 00:20:27.892 "core_count": 1 00:20:27.892 } 00:20:27.892 14:46:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 73852 00:20:27.892 14:46:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 73852 ']' 00:20:27.892 14:46:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 73852 00:20:27.892 14:46:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:20:27.892 14:46:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:27.892 14:46:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73852 00:20:27.892 killing process with pid 73852 00:20:27.892 14:46:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:27.892 14:46:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:27.892 14:46:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73852' 00:20:27.892 14:46:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 73852 00:20:27.892 14:46:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 73852 00:20:27.892 14:46:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:27.892 [2024-11-04 14:46:19.832412] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
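The failover exercise itself is the RPC choreography traced above: two paths (4420 and 4421) are attached to a single NVMe0 controller with the failover multipath policy, bdevperf runs its 15-second verify workload, and in the meantime the script removes listeners and brings alternates up so the initiator is forced to switch paths; the JSON block is the resulting bdevperf summary (about 12,000 IOPS with io_failed=4140 recorded across the switches). A hedged condensation of the host-side sequence, with addresses, ports, and flags as in the trace and sleeps kept only where the trace shows them:

  bdevperf_rpc_sock=/var/tmp/bdevperf.sock

  # Attach two initial paths to the same controller; -x failover selects the
  # active-passive multipath policy (switch paths on failure rather than spread I/O).
  $rpc_py -s $bdevperf_rpc_sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  $rpc_py -s $bdevperf_rpc_sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

  # bdevperf.py perform_tests is launched in the background at this point (15 s verify run),
  # then the listeners are flipped underneath it:
  sleep 1
  $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  sleep 3
  $rpc_py -s $bdevperf_rpc_sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  sleep 3
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  sleep 1
  $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422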
00:20:27.892 [2024-11-04 14:46:19.832490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73852 ] 00:20:27.892 [2024-11-04 14:46:19.969812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.892 [2024-11-04 14:46:20.006536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.892 [2024-11-04 14:46:20.038038] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:27.892 Running I/O for 15 seconds... 00:20:27.892 12985.00 IOPS, 50.72 MiB/s [2024-11-04T14:46:37.032Z] [2024-11-04 14:46:22.381075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.892 [2024-11-04 14:46:22.381141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.892 [2024-11-04 14:46:22.381161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:111888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.892 [2024-11-04 14:46:22.381171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.892 [2024-11-04 14:46:22.381182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:111896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.892 [2024-11-04 14:46:22.381191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.892 [2024-11-04 14:46:22.381202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:111904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.892 [2024-11-04 14:46:22.381211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.892 [2024-11-04 14:46:22.381222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:111912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.892 [2024-11-04 14:46:22.381230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.892 [2024-11-04 14:46:22.381241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:111920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.892 [2024-11-04 14:46:22.381250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.892 [2024-11-04 14:46:22.381260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.892 [2024-11-04 14:46:22.381269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.892 [2024-11-04 14:46:22.381280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.892 [2024-11-04 14:46:22.381288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:27.892 [2024-11-04 14:46:22.381299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:111304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.892 [2024-11-04 14:46:22.381308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.892 [2024-11-04 14:46:22.381319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:111312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.892 [2024-11-04 14:46:22.381328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.892 [2024-11-04 14:46:22.381338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:111320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.892 [2024-11-04 14:46:22.381378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.892 [2024-11-04 14:46:22.381390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:111328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.892 [2024-11-04 14:46:22.381399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.892 [2024-11-04 14:46:22.381410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:111336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.892 [2024-11-04 14:46:22.381419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.892 [2024-11-04 14:46:22.381430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:111344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.892 [2024-11-04 14:46:22.381439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.892 [2024-11-04 14:46:22.381449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:111352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.892 [2024-11-04 14:46:22.381458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.892 [2024-11-04 14:46:22.381468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:111360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.892 [2024-11-04 14:46:22.381481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.892 [2024-11-04 14:46:22.381491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:111368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.892 [2024-11-04 14:46:22.381500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.892 [2024-11-04 14:46:22.381511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:111376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.892 [2024-11-04 14:46:22.381520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.892 [2024-11-04 
14:46:22.381532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:111384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.892 [2024-11-04 14:46:22.381541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.892 [2024-11-04 14:46:22.381552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:111392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.892 [2024-11-04 14:46:22.381560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.892 [2024-11-04 14:46:22.381571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:111400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.892 [2024-11-04 14:46:22.381580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.892 [2024-11-04 14:46:22.381590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:111408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.893 [2024-11-04 14:46:22.381599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.381619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:111416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.893 [2024-11-04 14:46:22.381628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.381645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:111424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.893 [2024-11-04 14:46:22.381654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.381665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:111944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.893 [2024-11-04 14:46:22.381674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.381684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:111952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.893 [2024-11-04 14:46:22.381692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.381703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:111960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.893 [2024-11-04 14:46:22.381712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.381722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:111968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.893 [2024-11-04 14:46:22.381731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.381742] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:111976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.893 [2024-11-04 14:46:22.381750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.381761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:111984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.893 [2024-11-04 14:46:22.381770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.381781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:111992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.893 [2024-11-04 14:46:22.381789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.381800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:112000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.893 [2024-11-04 14:46:22.381810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.381821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:111432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.893 [2024-11-04 14:46:22.381830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.381842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:111440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.893 [2024-11-04 14:46:22.381850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.381861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:111448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.893 [2024-11-04 14:46:22.381870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.381881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:111456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.893 [2024-11-04 14:46:22.381894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.381905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:111464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.893 [2024-11-04 14:46:22.381913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.381924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:111472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.893 [2024-11-04 14:46:22.381933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.381943] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:67 nsid:1 lba:111480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.893 [2024-11-04 14:46:22.381952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.381963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:111488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.893 [2024-11-04 14:46:22.381972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.381983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:111496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.893 [2024-11-04 14:46:22.381991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.382002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:111504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.893 [2024-11-04 14:46:22.382012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.382023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:111512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.893 [2024-11-04 14:46:22.382032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.382043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:111520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.893 [2024-11-04 14:46:22.382052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.382062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:111528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.893 [2024-11-04 14:46:22.382071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.382082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:111536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.893 [2024-11-04 14:46:22.382091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.382101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:111544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.893 [2024-11-04 14:46:22.382110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.382121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:111552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.893 [2024-11-04 14:46:22.382131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.382149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 
nsid:1 lba:111560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.893 [2024-11-04 14:46:22.382159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.382171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:111568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.893 [2024-11-04 14:46:22.382179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.382190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:111576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.893 [2024-11-04 14:46:22.382199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.382210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:111584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.893 [2024-11-04 14:46:22.382218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.382229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:111592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.893 [2024-11-04 14:46:22.382237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.382248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:111600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.893 [2024-11-04 14:46:22.382257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.382268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:111608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.893 [2024-11-04 14:46:22.382276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.382287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:111616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.893 [2024-11-04 14:46:22.382296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.382307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:112008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.893 [2024-11-04 14:46:22.382315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.382326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.893 [2024-11-04 14:46:22.382334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.382345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:112024 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:20:27.893 [2024-11-04 14:46:22.382354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.382364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:112032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.893 [2024-11-04 14:46:22.382373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.382383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:112040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.893 [2024-11-04 14:46:22.382395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.893 [2024-11-04 14:46:22.382406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:112048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.894 [2024-11-04 14:46:22.382415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.382426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:112056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.894 [2024-11-04 14:46:22.382435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.382445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:112064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.894 [2024-11-04 14:46:22.382456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.382467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:111624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.894 [2024-11-04 14:46:22.382476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.382486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:111632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.894 [2024-11-04 14:46:22.382495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.382506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:111640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.894 [2024-11-04 14:46:22.382515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.382526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:111648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.894 [2024-11-04 14:46:22.382535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.382546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:111656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:27.894 [2024-11-04 14:46:22.382555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.382565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:111664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.894 [2024-11-04 14:46:22.382574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.382585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:111672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.894 [2024-11-04 14:46:22.382593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.382612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:111680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.894 [2024-11-04 14:46:22.382621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.382632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:111688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.894 [2024-11-04 14:46:22.382640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.382651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.894 [2024-11-04 14:46:22.382664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.382675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:111704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.894 [2024-11-04 14:46:22.382684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.382695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:111712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.894 [2024-11-04 14:46:22.382703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.382714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:111720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.894 [2024-11-04 14:46:22.382723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.382733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:111728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.894 [2024-11-04 14:46:22.382742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.382753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:111736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.894 [2024-11-04 
14:46:22.382762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.382772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:111744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.894 [2024-11-04 14:46:22.382783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.382794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:112072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.894 [2024-11-04 14:46:22.382802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.382817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:112080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.894 [2024-11-04 14:46:22.382826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.382837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:112088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.894 [2024-11-04 14:46:22.382845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.382856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:112096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.894 [2024-11-04 14:46:22.382865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.382875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:112104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.894 [2024-11-04 14:46:22.382884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.382895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:112112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.894 [2024-11-04 14:46:22.382904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.382918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:112120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.894 [2024-11-04 14:46:22.382927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.382938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:112128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.894 [2024-11-04 14:46:22.382946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.382957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.894 [2024-11-04 14:46:22.382965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.382976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.894 [2024-11-04 14:46:22.382985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.382995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:112152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.894 [2024-11-04 14:46:22.383004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.383014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:112160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.894 [2024-11-04 14:46:22.383023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.383034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:112168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.894 [2024-11-04 14:46:22.383042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.383053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:112176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.894 [2024-11-04 14:46:22.383061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.383072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:112184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.894 [2024-11-04 14:46:22.383081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.383091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.894 [2024-11-04 14:46:22.383102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.383112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:111752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.894 [2024-11-04 14:46:22.383121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.383133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:111760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.894 [2024-11-04 14:46:22.383142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.383153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:111768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.894 [2024-11-04 14:46:22.383165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.383176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:111776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.894 [2024-11-04 14:46:22.383184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.894 [2024-11-04 14:46:22.383196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:111784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.894 [2024-11-04 14:46:22.383205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.895 [2024-11-04 14:46:22.383215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:111792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.895 [2024-11-04 14:46:22.383224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.895 [2024-11-04 14:46:22.383235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:111800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.895 [2024-11-04 14:46:22.383244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.895 [2024-11-04 14:46:22.383254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:111808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.895 [2024-11-04 14:46:22.383263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.895 [2024-11-04 14:46:22.383274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:111816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.895 [2024-11-04 14:46:22.383282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.895 [2024-11-04 14:46:22.383293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:111824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.895 [2024-11-04 14:46:22.383301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.895 [2024-11-04 14:46:22.383312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:111832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.895 [2024-11-04 14:46:22.383320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.895 [2024-11-04 14:46:22.383331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:111840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.895 [2024-11-04 14:46:22.383340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.895 [2024-11-04 14:46:22.383351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13aa060 is same with the state(6) to be set 00:20:27.895 [2024-11-04 14:46:22.383362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.895 
[2024-11-04 14:46:22.383369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.895 [2024-11-04 14:46:22.383376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111848 len:8 PRP1 0x0 PRP2 0x0 00:20:27.895 [2024-11-04 14:46:22.383384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.895 [2024-11-04 14:46:22.383394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.895 [2024-11-04 14:46:22.383400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.895 [2024-11-04 14:46:22.383412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111856 len:8 PRP1 0x0 PRP2 0x0 00:20:27.895 [2024-11-04 14:46:22.383421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.895 [2024-11-04 14:46:22.383430] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.895 [2024-11-04 14:46:22.383437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.895 [2024-11-04 14:46:22.383445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111864 len:8 PRP1 0x0 PRP2 0x0 00:20:27.895 [2024-11-04 14:46:22.383453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.895 [2024-11-04 14:46:22.383462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.895 [2024-11-04 14:46:22.383468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.895 [2024-11-04 14:46:22.383475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111872 len:8 PRP1 0x0 PRP2 0x0 00:20:27.895 [2024-11-04 14:46:22.383484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.895 [2024-11-04 14:46:22.383493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.895 [2024-11-04 14:46:22.383499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.895 [2024-11-04 14:46:22.383506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112200 len:8 PRP1 0x0 PRP2 0x0 00:20:27.895 [2024-11-04 14:46:22.383514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.895 [2024-11-04 14:46:22.383524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.895 [2024-11-04 14:46:22.383529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.895 [2024-11-04 14:46:22.383536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112208 len:8 PRP1 0x0 PRP2 0x0 00:20:27.895 [2024-11-04 14:46:22.383544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.895 [2024-11-04 14:46:22.383553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.895 [2024-11-04 14:46:22.383559] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.895 [2024-11-04 14:46:22.383565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112216 len:8 PRP1 0x0 PRP2 0x0 00:20:27.895 [2024-11-04 14:46:22.383574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.895 [2024-11-04 14:46:22.383583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.895 [2024-11-04 14:46:22.383589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.895 [2024-11-04 14:46:22.383595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112224 len:8 PRP1 0x0 PRP2 0x0 00:20:27.895 [2024-11-04 14:46:22.383604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.895 [2024-11-04 14:46:22.383620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.895 [2024-11-04 14:46:22.383626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.895 [2024-11-04 14:46:22.383632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112232 len:8 PRP1 0x0 PRP2 0x0 00:20:27.895 [2024-11-04 14:46:22.383641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.895 [2024-11-04 14:46:22.383650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.895 [2024-11-04 14:46:22.383659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.895 [2024-11-04 14:46:22.383667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112240 len:8 PRP1 0x0 PRP2 0x0 00:20:27.895 [2024-11-04 14:46:22.383676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.895 [2024-11-04 14:46:22.383685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.895 [2024-11-04 14:46:22.383694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.895 [2024-11-04 14:46:22.383700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112248 len:8 PRP1 0x0 PRP2 0x0 00:20:27.895 [2024-11-04 14:46:22.383709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.895 [2024-11-04 14:46:22.383717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.895 [2024-11-04 14:46:22.383724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.895 [2024-11-04 14:46:22.383730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112256 len:8 PRP1 0x0 PRP2 0x0 00:20:27.895 [2024-11-04 14:46:22.383739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.895 [2024-11-04 14:46:22.383748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.895 [2024-11-04 14:46:22.383754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:20:27.895 [2024-11-04 14:46:22.383761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112264 len:8 PRP1 0x0 PRP2 0x0 00:20:27.895 [2024-11-04 14:46:22.383769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.895 [2024-11-04 14:46:22.383778] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.895 [2024-11-04 14:46:22.383784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.895 [2024-11-04 14:46:22.383790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112272 len:8 PRP1 0x0 PRP2 0x0 00:20:27.895 [2024-11-04 14:46:22.383799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.895 [2024-11-04 14:46:22.383808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.895 [2024-11-04 14:46:22.383814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.895 [2024-11-04 14:46:22.383820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112280 len:8 PRP1 0x0 PRP2 0x0 00:20:27.895 [2024-11-04 14:46:22.383828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.895 [2024-11-04 14:46:22.383837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.895 [2024-11-04 14:46:22.383843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.895 [2024-11-04 14:46:22.383850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112288 len:8 PRP1 0x0 PRP2 0x0 00:20:27.895 [2024-11-04 14:46:22.383858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.895 [2024-11-04 14:46:22.383867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.895 [2024-11-04 14:46:22.383873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.895 [2024-11-04 14:46:22.383880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112296 len:8 PRP1 0x0 PRP2 0x0 00:20:27.895 [2024-11-04 14:46:22.383888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.895 [2024-11-04 14:46:22.383901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.895 [2024-11-04 14:46:22.383907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.895 [2024-11-04 14:46:22.383915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112304 len:8 PRP1 0x0 PRP2 0x0 00:20:27.896 [2024-11-04 14:46:22.383923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.896 [2024-11-04 14:46:22.383933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.896 [2024-11-04 14:46:22.383939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.896 
[2024-11-04 14:46:22.383946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112312 len:8 PRP1 0x0 PRP2 0x0 00:20:27.896 [2024-11-04 14:46:22.383954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.896 [2024-11-04 14:46:22.383964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.896 [2024-11-04 14:46:22.383970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.896 [2024-11-04 14:46:22.383976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112320 len:8 PRP1 0x0 PRP2 0x0 00:20:27.896 [2024-11-04 14:46:22.383984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.896 [2024-11-04 14:46:22.384024] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:20:27.896 [2024-11-04 14:46:22.384070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.896 [2024-11-04 14:46:22.384081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.896 [2024-11-04 14:46:22.384091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.896 [2024-11-04 14:46:22.384100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.896 [2024-11-04 14:46:22.384109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.896 [2024-11-04 14:46:22.384118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.896 [2024-11-04 14:46:22.384128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.896 [2024-11-04 14:46:22.384137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.896 [2024-11-04 14:46:22.384147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:27.896 [2024-11-04 14:46:22.387467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:27.896 [2024-11-04 14:46:22.387505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130d710 (9): Bad file descriptor 00:20:27.896 [2024-11-04 14:46:22.411845] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
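The dump above is one qpair's queued I/O being flushed: each READ/WRITE command is printed and then completed as "ABORTED - SQ DELETION (00/08)" while bdev_nvme fails nqn.2016-06.io.spdk:cnode1 over from 10.0.0.3:4420 to 10.0.0.3:4421 and resets the controller ("Resetting controller successful."). A minimal shell sketch for sizing such a burst from a saved copy of this console output; the ./console.log path is an assumption for illustration, not something the job itself writes:
  # count aborted completions printed during the flush (grep -o handles multiple entries per physical line)
  grep -o 'ABORTED - SQ DELETION' console.log | wc -l
  # confirm the failover source/target trids, e.g. "from 10.0.0.3:4420 to 10.0.0.3:4421"
  grep -o 'Start failover from [^ ]* to [^ ]*' console.log
  # number of controller resets that completed successfully
  grep -o 'Resetting controller successful' console.log | wc -l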
00:20:27.896 12689.50 IOPS, 49.57 MiB/s [2024-11-04T14:46:37.036Z] 12775.00 IOPS, 49.90 MiB/s [2024-11-04T14:46:37.036Z] 12813.25 IOPS, 50.05 MiB/s [2024-11-04T14:46:37.036Z] [2024-11-04 14:46:25.866386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:83352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.896 [2024-11-04 14:46:25.866444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.896 [2024-11-04 14:46:25.866473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.896 [2024-11-04 14:46:25.866489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.896 [2024-11-04 14:46:25.866499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.896 [2024-11-04 14:46:25.866507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.896 [2024-11-04 14:46:25.866516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.896 [2024-11-04 14:46:25.866523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.896 [2024-11-04 14:46:25.866532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:83384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.896 [2024-11-04 14:46:25.866540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.896 [2024-11-04 14:46:25.866548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.896 [2024-11-04 14:46:25.866556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.896 [2024-11-04 14:46:25.866564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.896 [2024-11-04 14:46:25.866571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.896 [2024-11-04 14:46:25.866580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.896 [2024-11-04 14:46:25.866587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.896 [2024-11-04 14:46:25.866596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.896 [2024-11-04 14:46:25.866603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.896 [2024-11-04 14:46:25.866621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.896 [2024-11-04 14:46:25.866628] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.896 [2024-11-04 14:46:25.866637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.896 [2024-11-04 14:46:25.866644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.896 [2024-11-04 14:46:25.866652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.896 [2024-11-04 14:46:25.866660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.896 [2024-11-04 14:46:25.866669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.896 [2024-11-04 14:46:25.866676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.896 [2024-11-04 14:46:25.866684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.896 [2024-11-04 14:46:25.866692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.896 [2024-11-04 14:46:25.866705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.896 [2024-11-04 14:46:25.866713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.896 [2024-11-04 14:46:25.866721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.896 [2024-11-04 14:46:25.866728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.896 [2024-11-04 14:46:25.866737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.896 [2024-11-04 14:46:25.866743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.896 [2024-11-04 14:46:25.866754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.896 [2024-11-04 14:46:25.866762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.896 [2024-11-04 14:46:25.866771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:83496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.896 [2024-11-04 14:46:25.866778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.896 [2024-11-04 14:46:25.866787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:83504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.896 [2024-11-04 14:46:25.866794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.896 [2024-11-04 14:46:25.866803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.896 [2024-11-04 14:46:25.866810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.896 [2024-11-04 14:46:25.866819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:83520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.896 [2024-11-04 14:46:25.866826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.896 [2024-11-04 14:46:25.866835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:83528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.897 [2024-11-04 14:46:25.866842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.866851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.897 [2024-11-04 14:46:25.866858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.866867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.897 [2024-11-04 14:46:25.866875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.866883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:83040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.897 [2024-11-04 14:46:25.866891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.866899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.897 [2024-11-04 14:46:25.866909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.866918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.897 [2024-11-04 14:46:25.866925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.866934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.897 [2024-11-04 14:46:25.866941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.866949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:83072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.897 [2024-11-04 14:46:25.866956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.866965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.897 [2024-11-04 14:46:25.866972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.866980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:83088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.897 [2024-11-04 14:46:25.866987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.866996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.897 [2024-11-04 14:46:25.867002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.867012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.897 [2024-11-04 14:46:25.867021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.867029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.897 [2024-11-04 14:46:25.867036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.867045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:83568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.897 [2024-11-04 14:46:25.867052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.867062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.897 [2024-11-04 14:46:25.867069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.867078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.897 [2024-11-04 14:46:25.867085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.867093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:83592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.897 [2024-11-04 14:46:25.867100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.867112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.897 [2024-11-04 14:46:25.867119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 
14:46:25.867128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.897 [2024-11-04 14:46:25.867134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.867143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:83616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.897 [2024-11-04 14:46:25.867150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.867159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.897 [2024-11-04 14:46:25.867166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.867174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.897 [2024-11-04 14:46:25.867182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.867190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.897 [2024-11-04 14:46:25.867197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.867206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.897 [2024-11-04 14:46:25.867213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.867222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.897 [2024-11-04 14:46:25.867229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.867237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.897 [2024-11-04 14:46:25.867245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.867254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:83096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.897 [2024-11-04 14:46:25.867262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.867270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:83104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.897 [2024-11-04 14:46:25.867278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.867287] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.897 [2024-11-04 14:46:25.867294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.867302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:83120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.897 [2024-11-04 14:46:25.867313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.867322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:83128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.897 [2024-11-04 14:46:25.867328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.867337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.897 [2024-11-04 14:46:25.867344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.867353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.897 [2024-11-04 14:46:25.867360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.867368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.897 [2024-11-04 14:46:25.867375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.867384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.897 [2024-11-04 14:46:25.867391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.867399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.897 [2024-11-04 14:46:25.867406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.867415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.897 [2024-11-04 14:46:25.867423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.867431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.897 [2024-11-04 14:46:25.867438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.867447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.897 [2024-11-04 14:46:25.867454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.867463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.897 [2024-11-04 14:46:25.867470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.897 [2024-11-04 14:46:25.867478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.898 [2024-11-04 14:46:25.867485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.898 [2024-11-04 14:46:25.867501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.898 [2024-11-04 14:46:25.867521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.898 [2024-11-04 14:46:25.867537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.898 [2024-11-04 14:46:25.867552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.898 [2024-11-04 14:46:25.867568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.898 [2024-11-04 14:46:25.867583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.898 [2024-11-04 14:46:25.867599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83784 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:20:27.898 [2024-11-04 14:46:25.867621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.898 [2024-11-04 14:46:25.867637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.898 [2024-11-04 14:46:25.867652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.898 [2024-11-04 14:46:25.867668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.898 [2024-11-04 14:46:25.867683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:83184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.898 [2024-11-04 14:46:25.867699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:83192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.898 [2024-11-04 14:46:25.867715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.898 [2024-11-04 14:46:25.867734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.898 [2024-11-04 14:46:25.867750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.898 [2024-11-04 14:46:25.867766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:83800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.898 
[2024-11-04 14:46:25.867782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.898 [2024-11-04 14:46:25.867798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.898 [2024-11-04 14:46:25.867813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.898 [2024-11-04 14:46:25.867829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.898 [2024-11-04 14:46:25.867844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.898 [2024-11-04 14:46:25.867860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.898 [2024-11-04 14:46:25.867875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.898 [2024-11-04 14:46:25.867891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.898 [2024-11-04 14:46:25.867907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.898 [2024-11-04 14:46:25.867926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.898 [2024-11-04 14:46:25.867942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.898 [2024-11-04 14:46:25.867958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.898 [2024-11-04 14:46:25.867973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.898 [2024-11-04 14:46:25.867988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.867997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.898 [2024-11-04 14:46:25.868004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.868013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.898 [2024-11-04 14:46:25.868025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.868033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:83224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.898 [2024-11-04 14:46:25.868041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.868050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:83232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.898 [2024-11-04 14:46:25.868057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.868066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:83240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.898 [2024-11-04 14:46:25.868074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.868082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.898 [2024-11-04 14:46:25.868089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.868098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.898 [2024-11-04 14:46:25.868105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.898 [2024-11-04 14:46:25.868114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.898 [2024-11-04 14:46:25.868121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 [2024-11-04 14:46:25.868132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:83272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.899 [2024-11-04 14:46:25.868139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 [2024-11-04 14:46:25.868148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.899 [2024-11-04 14:46:25.868155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 [2024-11-04 14:46:25.868164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.899 [2024-11-04 14:46:25.868171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 [2024-11-04 14:46:25.868180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.899 [2024-11-04 14:46:25.868187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 [2024-11-04 14:46:25.868197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.899 [2024-11-04 14:46:25.868204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 [2024-11-04 14:46:25.868213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.899 [2024-11-04 14:46:25.868220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 [2024-11-04 14:46:25.868228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.899 [2024-11-04 14:46:25.868235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 [2024-11-04 14:46:25.868244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.899 [2024-11-04 14:46:25.868251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 [2024-11-04 14:46:25.868259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.899 [2024-11-04 14:46:25.868266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 [2024-11-04 14:46:25.868275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.899 [2024-11-04 14:46:25.868284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 [2024-11-04 14:46:25.868293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.899 [2024-11-04 14:46:25.868300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 [2024-11-04 14:46:25.868309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:83296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.899 [2024-11-04 14:46:25.868316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 [2024-11-04 14:46:25.868324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:83304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.899 [2024-11-04 14:46:25.868332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 [2024-11-04 14:46:25.868343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:83312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.899 [2024-11-04 14:46:25.868350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 [2024-11-04 14:46:25.868359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:83320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.899 [2024-11-04 14:46:25.868367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 [2024-11-04 14:46:25.868375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:83328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.899 [2024-11-04 14:46:25.868382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 [2024-11-04 14:46:25.868391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.899 [2024-11-04 14:46:25.868398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 [2024-11-04 14:46:25.868406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13aaa80 is same with the state(6) to be set 00:20:27.899 [2024-11-04 14:46:25.868415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.899 [2024-11-04 14:46:25.868421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.899 [2024-11-04 14:46:25.868426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83344 len:8 PRP1 0x0 PRP2 0x0 00:20:27.899 [2024-11-04 14:46:25.868434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 [2024-11-04 14:46:25.868442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.899 [2024-11-04 14:46:25.868447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.899 [2024-11-04 14:46:25.868452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83992 len:8 PRP1 0x0 PRP2 0x0 00:20:27.899 [2024-11-04 14:46:25.868459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 [2024-11-04 14:46:25.868467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.899 [2024-11-04 14:46:25.868471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.899 [2024-11-04 14:46:25.868477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84000 len:8 PRP1 0x0 PRP2 0x0 00:20:27.899 [2024-11-04 14:46:25.868483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 [2024-11-04 14:46:25.868491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.899 [2024-11-04 14:46:25.868495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.899 [2024-11-04 14:46:25.868501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84008 len:8 PRP1 0x0 PRP2 0x0 00:20:27.899 [2024-11-04 14:46:25.868507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 [2024-11-04 14:46:25.868516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.899 [2024-11-04 14:46:25.868525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.899 [2024-11-04 14:46:25.868530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84016 len:8 PRP1 0x0 PRP2 0x0 00:20:27.899 [2024-11-04 14:46:25.868540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 [2024-11-04 14:46:25.868548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.899 [2024-11-04 14:46:25.868553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.899 [2024-11-04 14:46:25.868559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84024 len:8 PRP1 0x0 PRP2 0x0 00:20:27.899 [2024-11-04 14:46:25.868565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 [2024-11-04 14:46:25.868573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.899 [2024-11-04 14:46:25.868578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.899 [2024-11-04 14:46:25.868583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84032 len:8 PRP1 0x0 PRP2 0x0 00:20:27.899 [2024-11-04 14:46:25.868590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 
[2024-11-04 14:46:25.868597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.899 [2024-11-04 14:46:25.868602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.899 [2024-11-04 14:46:25.868614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84040 len:8 PRP1 0x0 PRP2 0x0 00:20:27.899 [2024-11-04 14:46:25.868621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 [2024-11-04 14:46:25.868629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.899 [2024-11-04 14:46:25.868634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.899 [2024-11-04 14:46:25.868639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84048 len:8 PRP1 0x0 PRP2 0x0 00:20:27.899 [2024-11-04 14:46:25.868646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 [2024-11-04 14:46:25.868680] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:20:27.899 [2024-11-04 14:46:25.868716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.899 [2024-11-04 14:46:25.868725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 [2024-11-04 14:46:25.868733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.899 [2024-11-04 14:46:25.868741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 [2024-11-04 14:46:25.868749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.899 [2024-11-04 14:46:25.868756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 [2024-11-04 14:46:25.868764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.899 [2024-11-04 14:46:25.868770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.899 [2024-11-04 14:46:25.868778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:20:27.899 [2024-11-04 14:46:25.868800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130d710 (9): Bad file descriptor 00:20:27.899 [2024-11-04 14:46:25.871440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:27.899 [2024-11-04 14:46:25.898690] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
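The burst of NOTICE lines above is the expected teardown print-out from nvme_qpair.c during path failover: once bdev_nvme starts the failover from 10.0.0.3:4421 to 10.0.0.3:4422, every READ/WRITE still queued on the old I/O submission queue is completed manually with ABORTED - SQ DELETION (00/08), the old TCP qpair is dropped (Bad file descriptor), and the controller is disconnected and reset onto the new trid before I/O resumes. A minimal sketch of a two-listener setup that exercises this path follows; the bdev/NQN names, ports, backing Malloc bdev, and the -x failover multipath flag are assumptions for illustration (not taken from this run's scripts), and the multipath flag spelling may differ between SPDK versions.

  rpc=scripts/rpc.py
  # Target side (sketch): one TCP subsystem with two listeners on assumed ports 4421/4422.
  $rpc nvmf_create_transport -t tcp
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
  # Initiator side (sketch): attach both paths under one controller name so bdev_nvme
  # can fail over to the second trid when the first listener goes away, as in the log above.
  $rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover
  $rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4422 -n nqn.2016-06.io.spdk:cnode1 -x failover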
00:20:27.899 12728.60 IOPS, 49.72 MiB/s [2024-11-04T14:46:37.039Z] 12712.50 IOPS, 49.66 MiB/s [2024-11-04T14:46:37.039Z] 12688.29 IOPS, 49.56 MiB/s [2024-11-04T14:46:37.039Z] 12672.12 IOPS, 49.50 MiB/s [2024-11-04T14:46:37.039Z] [2024-11-04 14:46:30.282718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.900 [2024-11-04 14:46:30.282778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.282796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.900 [2024-11-04 14:46:30.282807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.282818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.900 [2024-11-04 14:46:30.282828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.282839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.900 [2024-11-04 14:46:30.282847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.282858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.900 [2024-11-04 14:46:30.282867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.282878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.900 [2024-11-04 14:46:30.282886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.282897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.900 [2024-11-04 14:46:30.282906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.282916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.900 [2024-11-04 14:46:30.282925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.282937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.900 [2024-11-04 14:46:30.282945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.282956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13504 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:20:27.900 [2024-11-04 14:46:30.282965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.282976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.900 [2024-11-04 14:46:30.282984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.282995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.900 [2024-11-04 14:46:30.283004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.283038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.900 [2024-11-04 14:46:30.283047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.283058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.900 [2024-11-04 14:46:30.283066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.283077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.900 [2024-11-04 14:46:30.283085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.283096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.900 [2024-11-04 14:46:30.283105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.283115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.900 [2024-11-04 14:46:30.283123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.283137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.900 [2024-11-04 14:46:30.283145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.283156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.900 [2024-11-04 14:46:30.283165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.283175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.900 [2024-11-04 14:46:30.283184] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.283195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.900 [2024-11-04 14:46:30.283203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.283214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.900 [2024-11-04 14:46:30.283223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.283233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.900 [2024-11-04 14:46:30.283242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.283252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.900 [2024-11-04 14:46:30.283261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.283272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.900 [2024-11-04 14:46:30.283286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.283296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.900 [2024-11-04 14:46:30.283306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.283316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.900 [2024-11-04 14:46:30.283325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.283335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.900 [2024-11-04 14:46:30.283345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.283355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.900 [2024-11-04 14:46:30.283364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.283375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.900 [2024-11-04 14:46:30.283384] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.283395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.900 [2024-11-04 14:46:30.283403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.283414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.900 [2024-11-04 14:46:30.283423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.283434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.900 [2024-11-04 14:46:30.283443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.283454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.900 [2024-11-04 14:46:30.283462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.283473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.900 [2024-11-04 14:46:30.283482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.283493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.900 [2024-11-04 14:46:30.283502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.283513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.900 [2024-11-04 14:46:30.283522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.283536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.900 [2024-11-04 14:46:30.283545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.283556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.900 [2024-11-04 14:46:30.283565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.900 [2024-11-04 14:46:30.283576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.900 [2024-11-04 14:46:30.283584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.283595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.901 [2024-11-04 14:46:30.283615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.283626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.901 [2024-11-04 14:46:30.283634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.283645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.901 [2024-11-04 14:46:30.283654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.283664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.901 [2024-11-04 14:46:30.283673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.283684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.901 [2024-11-04 14:46:30.283692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.283703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.901 [2024-11-04 14:46:30.283712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.283722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.901 [2024-11-04 14:46:30.283731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.283741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.901 [2024-11-04 14:46:30.283750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.283761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.901 [2024-11-04 14:46:30.283770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.283780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.901 [2024-11-04 14:46:30.283793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:27.901 [2024-11-04 14:46:30.283804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.901 [2024-11-04 14:46:30.283813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.283824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.901 [2024-11-04 14:46:30.283833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.283843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.901 [2024-11-04 14:46:30.283852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.283863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.901 [2024-11-04 14:46:30.283872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.283882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.901 [2024-11-04 14:46:30.283891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.283901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.901 [2024-11-04 14:46:30.283910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.283921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.901 [2024-11-04 14:46:30.283930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.283940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.901 [2024-11-04 14:46:30.283949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.283959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.901 [2024-11-04 14:46:30.283968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.283979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.901 [2024-11-04 14:46:30.283988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.283998] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.901 [2024-11-04 14:46:30.284007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.284017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.901 [2024-11-04 14:46:30.284026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.284036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.901 [2024-11-04 14:46:30.284049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.284060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.901 [2024-11-04 14:46:30.284070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.284080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.901 [2024-11-04 14:46:30.284090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.284100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.901 [2024-11-04 14:46:30.284109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.284120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.901 [2024-11-04 14:46:30.284128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.284139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.901 [2024-11-04 14:46:30.284148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.284159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.901 [2024-11-04 14:46:30.284167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.284178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.901 [2024-11-04 14:46:30.284186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.284197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.901 [2024-11-04 14:46:30.284206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.284217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.901 [2024-11-04 14:46:30.284226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.284236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.901 [2024-11-04 14:46:30.284245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.284255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.901 [2024-11-04 14:46:30.284263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.901 [2024-11-04 14:46:30.284274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.901 [2024-11-04 14:46:30.284283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.284306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.284325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.284344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.284363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.284382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:86 nsid:1 lba:13752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.284402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.284422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.284441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.284460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.284479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.284499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.284518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.284541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.284561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.284580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13832 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.284599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.284626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.284645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.284665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.284684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.284708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.902 [2024-11-04 14:46:30.284728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.902 [2024-11-04 14:46:30.284748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.902 [2024-11-04 14:46:30.284767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.902 [2024-11-04 14:46:30.284787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.902 [2024-11-04 
14:46:30.284811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.902 [2024-11-04 14:46:30.284830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.902 [2024-11-04 14:46:30.284849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.902 [2024-11-04 14:46:30.284869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.284888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.284908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.284927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.284946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.284965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.284984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.284995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.285003] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.285014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.285025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.285036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.285046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.285060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.285069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.902 [2024-11-04 14:46:30.285079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.902 [2024-11-04 14:46:30.285089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.903 [2024-11-04 14:46:30.285100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.903 [2024-11-04 14:46:30.285109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.903 [2024-11-04 14:46:30.285127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.903 [2024-11-04 14:46:30.285136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.903 [2024-11-04 14:46:30.285147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.903 [2024-11-04 14:46:30.285156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.903 [2024-11-04 14:46:30.285166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.903 [2024-11-04 14:46:30.285175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.903 [2024-11-04 14:46:30.285185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.903 [2024-11-04 14:46:30.285194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.903 [2024-11-04 14:46:30.285205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.903 [2024-11-04 14:46:30.285213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.903 [2024-11-04 14:46:30.285224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.903 [2024-11-04 14:46:30.285233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.903 [2024-11-04 14:46:30.285244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.903 [2024-11-04 14:46:30.285252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.903 [2024-11-04 14:46:30.285263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.903 [2024-11-04 14:46:30.285271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.903 [2024-11-04 14:46:30.285282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.903 [2024-11-04 14:46:30.285291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.903 [2024-11-04 14:46:30.285302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.903 [2024-11-04 14:46:30.285315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.903 [2024-11-04 14:46:30.285326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.903 [2024-11-04 14:46:30.285335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.903 [2024-11-04 14:46:30.285372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.903 [2024-11-04 14:46:30.285381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.903 [2024-11-04 14:46:30.285388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13488 len:8 PRP1 0x0 PRP2 0x0 00:20:27.903 [2024-11-04 14:46:30.285398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.903 [2024-11-04 14:46:30.285437] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:20:27.903 [2024-11-04 14:46:30.285474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.903 [2024-11-04 14:46:30.285485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.903 [2024-11-04 14:46:30.285495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.903 [2024-11-04 14:46:30.285504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.903 [2024-11-04 14:46:30.285514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.903 [2024-11-04 14:46:30.285523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.903 [2024-11-04 14:46:30.285532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.903 [2024-11-04 14:46:30.285541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.903 [2024-11-04 14:46:30.285551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:20:27.903 [2024-11-04 14:46:30.288814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:20:27.903 [2024-11-04 14:46:30.288846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130d710 (9): Bad file descriptor 00:20:27.903 12614.33 IOPS, 49.27 MiB/s [2024-11-04T14:46:37.043Z] [2024-11-04 14:46:30.318786] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:20:27.903 12662.90 IOPS, 49.46 MiB/s [2024-11-04T14:46:37.043Z] 12718.73 IOPS, 49.68 MiB/s [2024-11-04T14:46:37.043Z] 12509.08 IOPS, 48.86 MiB/s [2024-11-04T14:46:37.043Z] 12331.62 IOPS, 48.17 MiB/s [2024-11-04T14:46:37.043Z] 12178.43 IOPS, 47.57 MiB/s [2024-11-04T14:46:37.043Z] 12033.20 IOPS, 47.00 MiB/s 00:20:27.903 Latency(us) 00:20:27.903 [2024-11-04T14:46:37.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.903 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:27.903 Verification LBA range: start 0x0 length 0x4000 00:20:27.903 NVMe0n1 : 15.01 12034.30 47.01 275.85 0.00 10373.66 441.11 15526.99 00:20:27.903 [2024-11-04T14:46:37.043Z] =================================================================================================================== 00:20:27.903 [2024-11-04T14:46:37.043Z] Total : 12034.30 47.01 275.85 0.00 10373.66 441.11 15526.99 00:20:27.903 Received shutdown signal, test time was about 15.000000 seconds 00:20:27.903 00:20:27.903 Latency(us) 00:20:27.903 [2024-11-04T14:46:37.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.903 [2024-11-04T14:46:37.043Z] =================================================================================================================== 00:20:27.903 [2024-11-04T14:46:37.043Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:27.903 14:46:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:20:27.903 14:46:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:20:27.903 14:46:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:20:27.903 14:46:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=74049 00:20:27.903 14:46:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 74049 /var/tmp/bdevperf.sock 00:20:27.903 14:46:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 74049 ']' 00:20:27.903 14:46:36 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:27.903 14:46:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:27.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:27.903 14:46:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:27.903 14:46:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:27.903 14:46:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:20:27.903 14:46:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:28.163 14:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:28.163 14:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:20:28.163 14:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:28.421 [2024-11-04 14:46:37.487094] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:28.421 14:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:20:28.677 [2024-11-04 14:46:37.651185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:20:28.677 14:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:28.937 NVMe0n1 00:20:28.937 14:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:29.204 00:20:29.204 14:46:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:29.204 00:20:29.204 14:46:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:29.204 14:46:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:20:29.462 14:46:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:29.719 14:46:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:20:33.017 14:46:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:33.017 14:46:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:20:33.017 14:46:41 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:33.017 14:46:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=74120 00:20:33.017 14:46:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 74120 00:20:33.949 { 00:20:33.949 "results": [ 00:20:33.949 { 00:20:33.949 "job": "NVMe0n1", 00:20:33.949 "core_mask": "0x1", 00:20:33.949 "workload": "verify", 00:20:33.949 "status": "finished", 00:20:33.949 "verify_range": { 00:20:33.949 "start": 0, 00:20:33.949 "length": 16384 00:20:33.949 }, 00:20:33.949 "queue_depth": 128, 00:20:33.949 "io_size": 4096, 00:20:33.949 "runtime": 1.005246, 00:20:33.949 "iops": 7906.5223835757615, 00:20:33.949 "mibps": 30.88485306084282, 00:20:33.949 "io_failed": 0, 00:20:33.949 "io_timeout": 0, 00:20:33.949 "avg_latency_us": 16115.541617436415, 00:20:33.949 "min_latency_us": 1638.4, 00:20:33.949 "max_latency_us": 13812.972307692307 00:20:33.949 } 00:20:33.949 ], 00:20:33.949 "core_count": 1 00:20:33.949 } 00:20:33.949 14:46:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:33.949 [2024-11-04 14:46:36.503743] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:20:33.949 [2024-11-04 14:46:36.503820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74049 ] 00:20:33.949 [2024-11-04 14:46:36.632900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.950 [2024-11-04 14:46:36.665319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.950 [2024-11-04 14:46:36.695027] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:33.950 [2024-11-04 14:46:38.619407] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:20:33.950 [2024-11-04 14:46:38.619491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.950 [2024-11-04 14:46:38.619504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.950 [2024-11-04 14:46:38.619513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.950 [2024-11-04 14:46:38.619520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.950 [2024-11-04 14:46:38.619528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.950 [2024-11-04 14:46:38.619535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.950 [2024-11-04 14:46:38.619543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.950 [2024-11-04 14:46:38.619549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.950 [2024-11-04 14:46:38.619557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:20:33.950 [2024-11-04 14:46:38.619582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:20:33.950 [2024-11-04 14:46:38.619597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b5710 (9): Bad file descriptor 00:20:33.950 [2024-11-04 14:46:38.621166] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:20:33.950 Running I/O for 1 seconds... 00:20:33.950 7820.00 IOPS, 30.55 MiB/s 00:20:33.950 Latency(us) 00:20:33.950 [2024-11-04T14:46:43.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.950 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:33.950 Verification LBA range: start 0x0 length 0x4000 00:20:33.950 NVMe0n1 : 1.01 7906.52 30.88 0.00 0.00 16115.54 1638.40 13812.97 00:20:33.950 [2024-11-04T14:46:43.090Z] =================================================================================================================== 00:20:33.950 [2024-11-04T14:46:43.090Z] Total : 7906.52 30.88 0.00 0.00 16115.54 1638.40 13812.97 00:20:33.950 14:46:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:33.950 14:46:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:20:34.208 14:46:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:34.465 14:46:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:34.465 14:46:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:20:34.465 14:46:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:34.723 14:46:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:20:38.005 14:46:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:38.005 14:46:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:20:38.005 14:46:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 74049 00:20:38.005 14:46:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 74049 ']' 00:20:38.005 14:46:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 74049 00:20:38.005 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:20:38.005 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:38.005 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74049 00:20:38.005 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:38.005 
14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:38.005 killing process with pid 74049 00:20:38.005 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74049' 00:20:38.005 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 74049 00:20:38.005 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 74049 00:20:38.005 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:20:38.263 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:38.263 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:20:38.263 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:38.263 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:20:38.263 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:38.263 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:38.534 rmmod nvme_tcp 00:20:38.534 rmmod nvme_fabrics 00:20:38.534 rmmod nvme_keyring 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 73794 ']' 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 73794 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 73794 ']' 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 73794 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73794 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:38.534 killing process with pid 73794 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73794' 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 73794 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 73794 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:38.534 14:46:47 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:38.534 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:38.792 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:38.792 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:38.792 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:38.792 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:38.792 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:38.792 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:38.792 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:38.792 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:38.792 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:38.792 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:38.792 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.792 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.792 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.792 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:20:38.792 00:20:38.792 real 0m30.904s 00:20:38.792 user 1m59.094s 00:20:38.792 sys 0m4.287s 00:20:38.792 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:38.792 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:38.792 ************************************ 00:20:38.792 END TEST nvmf_failover 00:20:38.792 ************************************ 00:20:38.792 14:46:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:20:38.792 14:46:47 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:38.792 14:46:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:38.792 14:46:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.792 ************************************ 00:20:38.792 START TEST nvmf_host_discovery 00:20:38.792 ************************************ 00:20:38.792 14:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:20:38.792 * Looking for test storage... 00:20:39.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:39.052 14:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:39.052 14:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:20:39.052 14:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:39.052 14:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:39.052 14:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:39.052 14:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:39.052 14:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:39.052 14:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:20:39.052 14:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:20:39.052 14:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:20:39.052 14:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:20:39.052 14:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:20:39.052 14:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:20:39.052 14:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:20:39.052 14:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:39.052 14:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:20:39.052 14:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:20:39.052 14:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:39.052 14:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:39.052 14:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:39.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.052 --rc genhtml_branch_coverage=1 00:20:39.052 --rc genhtml_function_coverage=1 00:20:39.052 --rc genhtml_legend=1 00:20:39.052 --rc geninfo_all_blocks=1 00:20:39.052 --rc geninfo_unexecuted_blocks=1 00:20:39.052 00:20:39.052 ' 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:39.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.052 --rc genhtml_branch_coverage=1 00:20:39.052 --rc genhtml_function_coverage=1 00:20:39.052 --rc genhtml_legend=1 00:20:39.052 --rc geninfo_all_blocks=1 00:20:39.052 --rc geninfo_unexecuted_blocks=1 00:20:39.052 00:20:39.052 ' 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:39.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.052 --rc genhtml_branch_coverage=1 00:20:39.052 --rc genhtml_function_coverage=1 00:20:39.052 --rc genhtml_legend=1 00:20:39.052 --rc geninfo_all_blocks=1 00:20:39.052 --rc geninfo_unexecuted_blocks=1 00:20:39.052 00:20:39.052 ' 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:39.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.052 --rc genhtml_branch_coverage=1 00:20:39.052 --rc genhtml_function_coverage=1 00:20:39.052 --rc genhtml_legend=1 00:20:39.052 --rc geninfo_all_blocks=1 00:20:39.052 --rc geninfo_unexecuted_blocks=1 00:20:39.052 00:20:39.052 ' 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.052 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:39.053 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:39.053 Cannot find device "nvmf_init_br" 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:39.053 Cannot find device "nvmf_init_br2" 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:39.053 Cannot find device "nvmf_tgt_br" 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:39.053 Cannot find device "nvmf_tgt_br2" 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:39.053 Cannot find device "nvmf_init_br" 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:39.053 Cannot find device "nvmf_init_br2" 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:39.053 Cannot find device "nvmf_tgt_br" 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:39.053 Cannot find device "nvmf_tgt_br2" 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:39.053 Cannot find device "nvmf_br" 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:39.053 Cannot find device "nvmf_init_if" 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:39.053 Cannot find device "nvmf_init_if2" 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:39.053 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:39.053 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:39.053 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:39.312 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:39.312 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:20:39.312 00:20:39.312 --- 10.0.0.3 ping statistics --- 00:20:39.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.312 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:39.312 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:39.312 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:20:39.312 00:20:39.312 --- 10.0.0.4 ping statistics --- 00:20:39.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.312 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:39.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:39.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:20:39.312 00:20:39.312 --- 10.0.0.1 ping statistics --- 00:20:39.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.312 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:39.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:39.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:20:39.312 00:20:39.312 --- 10.0.0.2 ping statistics --- 00:20:39.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.312 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=74445 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 74445 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 74445 ']' 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:39.312 14:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:39.312 [2024-11-04 14:46:48.369499] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
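Note on the nvmf/common.sh trace above: the "Cannot find device" and "Cannot open network namespace" messages are only the teardown of a previous run finding nothing to delete; the commands that follow rebuild the topology the rest of the test depends on. Condensed into plain shell (a sketch only — the real helper also tags each iptables rule with an SPDK_NVMF comment via its ipts wrapper and brings every link, including lo inside the namespace, up individually), it amounts to:

  ip netns add nvmf_tgt_ns_spdk
  # four veth pairs: the *_if ends carry traffic, the *_br ends get enslaved to the bridge
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  # target-facing interfaces live inside the namespace where nvmf_tgt will run
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # bridge the four *_br ends together so host side and namespace can reach each other
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
  # open NVMe/TCP port 4420 on the initiator-side interfaces and allow bridge-local forwarding
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # connectivity check in both directions before the target app starts
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

With that in place NVMF_APP is prefixed with "ip netns exec nvmf_tgt_ns_spdk", so the nvmf_tgt started below listens on the 10.0.0.3/10.0.0.4 side of the bridge.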
00:20:39.312 [2024-11-04 14:46:48.369553] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.570 [2024-11-04 14:46:48.507848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.570 [2024-11-04 14:46:48.544386] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.570 [2024-11-04 14:46:48.544432] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.570 [2024-11-04 14:46:48.544438] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.570 [2024-11-04 14:46:48.544443] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.570 [2024-11-04 14:46:48.544448] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:39.570 [2024-11-04 14:46:48.544732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.570 [2024-11-04 14:46:48.577071] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:40.134 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:40.134 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:20:40.134 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:40.134 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:40.134 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:40.134 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.134 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:40.134 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.134 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:40.391 [2024-11-04 14:46:49.274725] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.391 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.391 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:20:40.391 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.391 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:40.391 [2024-11-04 14:46:49.282848] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:40.391 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.391 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:20:40.391 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.391 14:46:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:40.391 null0 00:20:40.391 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.391 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:20:40.391 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.391 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:40.391 null1 00:20:40.391 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.391 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:20:40.391 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.391 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:40.391 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.391 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=74477 00:20:40.391 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 74477 /tmp/host.sock 00:20:40.391 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 74477 ']' 00:20:40.391 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:20:40.391 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:40.391 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:40.391 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:40.391 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:40.391 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:20:40.391 14:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:40.391 [2024-11-04 14:46:49.348399] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
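At this point two SPDK applications are up: the target, running inside nvmf_tgt_ns_spdk on the default RPC socket /var/tmp/spdk.sock (pid 74445), and a second nvmf_tgt playing the host/initiator role on /tmp/host.sock (pid 74477, launched just above with "-m 0x1 -r /tmp/host.sock"). The target-side provisioning traced so far is roughly equivalent to the following calls, with scripts/rpc.py shown as a stand-in for the test's rpc_cmd wrapper:

  # transport + discovery service on the namespaced address
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
  # two null bdevs that will later back namespaces of cnode0
  scripts/rpc.py bdev_null_create null0 1000 512
  scripts/rpc.py bdev_null_create null1 1000 512
  scripts/rpc.py bdev_wait_for_examine

Everything issued as "rpc_cmd -s /tmp/host.sock ..." from here on is aimed at the host app instead of the target.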
00:20:40.391 [2024-11-04 14:46:49.348468] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74477 ] 00:20:40.391 [2024-11-04 14:46:49.483544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.391 [2024-11-04 14:46:49.519660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.648 [2024-11-04 14:46:49.550185] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:41.214 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:41.214 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:20:41.214 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:41.214 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:20:41.214 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.214 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:41.214 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.214 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:20:41.214 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.214 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:41.214 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.214 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:20:41.214 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:20:41.214 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:41.214 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:41.214 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.215 14:46:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.215 14:46:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:41.215 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:41.474 [2024-11-04 14:46:50.407056] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:41.474 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:41.475 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:41.475 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:20:41.475 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:41.475 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.475 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:41.475 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:41.475 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:41.475 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:41.475 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.475 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:20:41.475 14:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:20:42.409 [2024-11-04 14:46:51.197361] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:42.409 [2024-11-04 14:46:51.197390] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:42.409 [2024-11-04 14:46:51.197403] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:42.409 [2024-11-04 14:46:51.203391] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:20:42.409 [2024-11-04 14:46:51.257706] 
bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:20:42.409 [2024-11-04 14:46:51.258372] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xc82e50:1 started. 00:20:42.409 [2024-11-04 14:46:51.259640] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:42.409 [2024-11-04 14:46:51.259653] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:42.409 [2024-11-04 14:46:51.266050] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xc82e50 was disconnected and freed. delete nvme_qpair. 00:20:42.670 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:42.670 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:42.670 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:20:42.670 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:42.670 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:42.670 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.670 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:42.670 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:42.670 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:42.670 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.670 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.670 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:42.670 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:20:42.670 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:20:42.670 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:42.670 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:42.670 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:20:42.670 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:20:42.670 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:42.670 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:42.670 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.670 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:42.670 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:42.670 14:46:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:42.670 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:42.671 [2024-11-04 14:46:51.708994] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xc90f80:1 started. 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:42.671 [2024-11-04 14:46:51.716376] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xc90f80 was disconnected and freed. delete nvme_qpair. 
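The repetitive blocks above and below all lean on a few helpers from host/discovery.sh and autotest_common.sh. Reconstructed from the xtrace output (a sketch of the observed behaviour, not the verbatim source), they are approximately:

  # all queries go to the host app, where the bdev_nvme discovery service lives
  get_subsystem_names() { rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs; }
  get_bdev_list()       { rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs; }
  get_subsystem_paths() { rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
                            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs; }

  get_notification_count() {
      # count notifications newer than the last seen id, then advance the cursor
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }

  waitforcondition() {
      # retry an arbitrary shell condition up to ~10 times, one second apart
      local cond=$1 max=10
      while (( max-- )); do
          eval "$cond" && return 0
          sleep 1
      done
      return 1
  }

So "is_notification_count_eq 1" is really "wait until exactly one new notification has arrived since the last check", which is why notify_id steps 0 -> 1 -> 2 in the trace as the namespaces show up on the host.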
00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:42.671 [2024-11-04 14:46:51.780284] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:42.671 [2024-11-04 14:46:51.780585] bdev_nvme.c:7364:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:42.671 [2024-11-04 14:46:51.780616] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:42.671 [2024-11-04 14:46:51.786581] bdev_nvme.c:7306:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:42.671 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.931 [2024-11-04 14:46:51.850845] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: 
[nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:20:42.931 [2024-11-04 14:46:51.850883] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:42.931 [2024-11-04 14:46:51.850890] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:42.931 [2024-11-04 14:46:51.850894] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # (( max-- )) 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:42.931 [2024-11-04 14:46:51.948694] bdev_nvme.c:7364:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:42.931 [2024-11-04 14:46:51.948716] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:42.931 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:42.932 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:42.932 [2024-11-04 14:46:51.954698] bdev_nvme.c:7169:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:20:42.932 [2024-11-04 14:46:51.954718] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:20:42.932 [2024-11-04 14:46:51.954777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.932 [2024-11-04 14:46:51.954796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:42.932 [2024-11-04 14:46:51.954802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.932 [2024-11-04 14:46:51.954807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.932 [2024-11-04 14:46:51.954812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.932 [2024-11-04 14:46:51.954817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.932 [2024-11-04 14:46:51.954823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.932 [2024-11-04 14:46:51.954828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.932 [2024-11-04 14:46:51.954833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f230 is same with the state(6) to be set 00:20:42.932 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:20:42.932 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:42.932 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:42.932 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:42.932 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.932 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:42.932 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:42.932 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.932 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.932 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:42.932 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:42.932 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:42.932 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:42.932 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:42.932 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:42.932 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:20:42.932 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:42.932 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.932 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:42.932 14:46:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:42.932 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:42.932 14:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:42.932 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.932 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:42.932 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:42.932 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:20:42.932 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:20:42.932 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:42.932 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:42.932 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:20:42.932 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:20:42.932 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:42.932 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:42.932 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:42.932 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.932 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:42.932 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:42.932 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.932 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:20:42.932 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:42.932 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:20:42.932 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:42.932 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:42.932 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:42.932 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:42.932 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:42.932 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:42.932 14:46:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:20:42.932 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:20:42.932 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:42.932 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.932 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:42.932 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 
00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.191 14:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.121 [2024-11-04 14:46:53.217803] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:44.122 [2024-11-04 14:46:53.217832] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:44.122 [2024-11-04 14:46:53.217842] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:44.122 [2024-11-04 14:46:53.223827] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:20:44.379 [2024-11-04 14:46:53.282063] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:20:44.379 [2024-11-04 14:46:53.282634] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xc57eb0:1 started. 00:20:44.379 [2024-11-04 14:46:53.284140] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:44.379 [2024-11-04 14:46:53.284170] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:44.379 [2024-11-04 14:46:53.286726] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xc57eb0 was disconnected and freed. delete nvme_qpair. 
00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.379 request: 00:20:44.379 { 00:20:44.379 "name": "nvme", 00:20:44.379 "trtype": "tcp", 00:20:44.379 "traddr": "10.0.0.3", 00:20:44.379 "adrfam": "ipv4", 00:20:44.379 "trsvcid": "8009", 00:20:44.379 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:44.379 "wait_for_attach": true, 00:20:44.379 "method": "bdev_nvme_start_discovery", 00:20:44.379 "req_id": 1 00:20:44.379 } 00:20:44.379 Got JSON-RPC error response 00:20:44.379 response: 00:20:44.379 { 00:20:44.379 "code": -17, 00:20:44.379 "message": "File exists" 00:20:44.379 } 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 
00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.379 request: 00:20:44.379 { 00:20:44.379 "name": "nvme_second", 00:20:44.379 "trtype": "tcp", 00:20:44.379 "traddr": "10.0.0.3", 00:20:44.379 "adrfam": "ipv4", 00:20:44.379 "trsvcid": "8009", 00:20:44.379 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:44.379 "wait_for_attach": true, 00:20:44.379 "method": "bdev_nvme_start_discovery", 00:20:44.379 "req_id": 1 00:20:44.379 } 00:20:44.379 Got JSON-RPC error response 00:20:44.379 response: 00:20:44.379 { 00:20:44.379 "code": -17, 00:20:44.379 "message": "File exists" 00:20:44.379 } 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # 
xargs 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.379 14:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:45.329 [2024-11-04 14:46:54.469166] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:45.329 [2024-11-04 14:46:54.469227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc83e40 with addr=10.0.0.3, port=8010 00:20:45.329 [2024-11-04 14:46:54.469241] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:45.329 [2024-11-04 14:46:54.469247] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:45.329 [2024-11-04 14:46:54.469252] 
bdev_nvme.c:7450:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:20:46.724 [2024-11-04 14:46:55.469166] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.724 [2024-11-04 14:46:55.469221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc83e40 with addr=10.0.0.3, port=8010 00:20:46.724 [2024-11-04 14:46:55.469234] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:46.724 [2024-11-04 14:46:55.469240] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:46.724 [2024-11-04 14:46:55.469244] bdev_nvme.c:7450:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:20:47.678 [2024-11-04 14:46:56.469079] bdev_nvme.c:7425:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:20:47.678 request: 00:20:47.678 { 00:20:47.678 "name": "nvme_second", 00:20:47.678 "trtype": "tcp", 00:20:47.678 "traddr": "10.0.0.3", 00:20:47.678 "adrfam": "ipv4", 00:20:47.678 "trsvcid": "8010", 00:20:47.678 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:47.678 "wait_for_attach": false, 00:20:47.678 "attach_timeout_ms": 3000, 00:20:47.678 "method": "bdev_nvme_start_discovery", 00:20:47.678 "req_id": 1 00:20:47.678 } 00:20:47.678 Got JSON-RPC error response 00:20:47.678 response: 00:20:47.678 { 00:20:47.678 "code": -110, 00:20:47.678 "message": "Connection timed out" 00:20:47.678 } 00:20:47.678 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:47.678 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:20:47.678 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:47.678 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:47.678 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:47.678 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:20:47.678 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:47.678 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.678 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.678 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:47.678 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:47.678 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:47.678 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.678 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:20:47.678 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:20:47.678 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 74477 00:20:47.678 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:20:47.678 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:47.678 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@121 -- # sync 00:20:47.935 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:47.935 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:20:47.935 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:47.935 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:47.935 rmmod nvme_tcp 00:20:47.935 rmmod nvme_fabrics 00:20:47.935 rmmod nvme_keyring 00:20:47.935 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:47.935 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:20:47.935 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:20:47.935 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 74445 ']' 00:20:47.936 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 74445 00:20:47.936 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 74445 ']' 00:20:47.936 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 74445 00:20:47.936 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:20:47.936 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:47.936 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74445 00:20:47.936 killing process with pid 74445 00:20:47.936 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:47.936 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:47.936 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74445' 00:20:47.936 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 74445 00:20:47.936 14:46:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 74445 00:20:47.936 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:47.936 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:47.936 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:47.936 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:20:47.936 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:20:47.936 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:47.936 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:20:47.936 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:47.936 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:47.936 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:47.936 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:47.936 14:46:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:47.936 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:48.193 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:48.193 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:48.193 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:48.193 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:48.193 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:48.193 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:48.193 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:48.193 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:48.193 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:48.193 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:48.193 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.193 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:48.193 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.193 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:20:48.193 00:20:48.193 real 0m9.348s 00:20:48.193 user 0m16.941s 00:20:48.193 sys 0m1.529s 00:20:48.193 ************************************ 00:20:48.193 END TEST nvmf_host_discovery 00:20:48.193 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:48.193 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:48.193 ************************************ 00:20:48.193 14:46:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:20:48.193 14:46:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:48.193 14:46:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:48.193 14:46:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.193 ************************************ 00:20:48.193 START TEST nvmf_host_multipath_status 00:20:48.193 ************************************ 00:20:48.193 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:20:48.193 * Looking for test storage... 
00:20:48.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:48.193 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:48.193 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:20:48.193 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:48.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.452 --rc genhtml_branch_coverage=1 00:20:48.452 --rc genhtml_function_coverage=1 00:20:48.452 --rc genhtml_legend=1 00:20:48.452 --rc geninfo_all_blocks=1 00:20:48.452 --rc geninfo_unexecuted_blocks=1 00:20:48.452 00:20:48.452 ' 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:48.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.452 --rc genhtml_branch_coverage=1 00:20:48.452 --rc genhtml_function_coverage=1 00:20:48.452 --rc genhtml_legend=1 00:20:48.452 --rc geninfo_all_blocks=1 00:20:48.452 --rc geninfo_unexecuted_blocks=1 00:20:48.452 00:20:48.452 ' 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:48.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.452 --rc genhtml_branch_coverage=1 00:20:48.452 --rc genhtml_function_coverage=1 00:20:48.452 --rc genhtml_legend=1 00:20:48.452 --rc geninfo_all_blocks=1 00:20:48.452 --rc geninfo_unexecuted_blocks=1 00:20:48.452 00:20:48.452 ' 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:48.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.452 --rc genhtml_branch_coverage=1 00:20:48.452 --rc genhtml_function_coverage=1 00:20:48.452 --rc genhtml_legend=1 00:20:48.452 --rc geninfo_all_blocks=1 00:20:48.452 --rc geninfo_unexecuted_blocks=1 00:20:48.452 00:20:48.452 ' 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:48.452 14:46:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:48.452 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:48.452 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:48.453 Cannot find device "nvmf_init_br" 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:48.453 Cannot find device "nvmf_init_br2" 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:48.453 Cannot find device "nvmf_tgt_br" 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:48.453 Cannot find device "nvmf_tgt_br2" 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:48.453 Cannot find device "nvmf_init_br" 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:48.453 Cannot find device "nvmf_init_br2" 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:48.453 Cannot find device "nvmf_tgt_br" 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:48.453 Cannot find device "nvmf_tgt_br2" 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:48.453 Cannot find device "nvmf_br" 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:20:48.453 Cannot find device "nvmf_init_if" 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:48.453 Cannot find device "nvmf_init_if2" 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:48.453 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:48.453 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:48.453 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:48.713 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:48.713 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:48.713 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:48.713 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:48.713 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:48.713 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:48.713 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:48.713 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:48.713 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:48.713 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:48.713 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:48.714 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:48.714 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:20:48.714 00:20:48.714 --- 10.0.0.3 ping statistics --- 00:20:48.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.714 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:48.714 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:48.714 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:20:48.714 00:20:48.714 --- 10.0.0.4 ping statistics --- 00:20:48.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.714 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:48.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
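The helpers above (nvmf/common.sh@177-219) build the two-path test network: namespace nvmf_tgt_ns_spdk holds the target ends of the veth pairs (nvmf_tgt_if/nvmf_tgt_if2 at 10.0.0.3-4), the initiator ends (nvmf_init_if/nvmf_init_if2 at 10.0.0.1-2) stay in the root namespace, every bridge-side peer is enslaved to nvmf_br, iptables accepts NVMe/TCP traffic on port 4420 and lets the bridge forward, and ping then confirms reachability. A condensed sketch of the same setup (names and addresses copied from the log, ordering slightly compacted, run as root):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br     # path 1, initiator side
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # path 2, initiator side
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # path 1, target side
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # path 2, target side
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # ... every interface is then brought up with "ip link set <dev> up" and the bridge is added:
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4    # root namespace can reach both target addresses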
00:20:48.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:20:48.714 00:20:48.714 --- 10.0.0.1 ping statistics --- 00:20:48.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.714 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:48.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:48.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:20:48.714 00:20:48.714 --- 10.0.0.2 ping statistics --- 00:20:48.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.714 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=74972 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 74972 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 74972 ']' 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
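At this point common.sh has prefixed NVMF_APP with the namespace wrapper and loaded nvme-tcp for the initiator side, and nvmfappstart -m 0x3 launches nvmf_tgt (pid 74972) inside nvmf_tgt_ns_spdk, then waits for its RPC socket. Roughly the following, with the polling loop being an assumption about what waitforlisten does rather than a copy of it:

    modprobe nvme-tcp            # kernel NVMe/TCP initiator used by the host side of the test
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # Poll until the target answers on its default RPC socket (illustrative loop):
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done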
00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:48.714 14:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:48.714 [2024-11-04 14:46:57.758056] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:20:48.714 [2024-11-04 14:46:57.758119] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.978 [2024-11-04 14:46:57.898624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:48.978 [2024-11-04 14:46:57.933687] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.978 [2024-11-04 14:46:57.933849] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.978 [2024-11-04 14:46:57.933971] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:48.978 [2024-11-04 14:46:57.934001] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:48.978 [2024-11-04 14:46:57.934017] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:48.978 [2024-11-04 14:46:57.934757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.978 [2024-11-04 14:46:57.935033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.978 [2024-11-04 14:46:57.965944] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:49.544 14:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:49.544 14:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:20:49.544 14:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:49.544 14:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:49.544 14:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:49.544 14:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.544 14:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=74972 00:20:49.544 14:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:49.801 [2024-11-04 14:46:58.817433] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.801 14:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:50.059 Malloc0 00:20:50.059 14:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:20:50.317 14:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:50.317 14:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:50.575 [2024-11-04 14:46:59.643886] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:50.575 14:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:50.833 [2024-11-04 14:46:59.839978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:50.833 14:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=75022 00:20:50.833 14:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:50.833 14:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:50.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:50.833 14:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 75022 /var/tmp/bdevperf.sock 00:20:50.833 14:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 75022 ']' 00:20:50.833 14:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:50.833 14:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:50.833 14:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
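With the target up, the script provisions it over JSON-RPC and then starts the initiator application: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with ANA reporting enabled (the -r flag, which the ANA-state checks below depend on), Malloc0 as its namespace, and two TCP listeners on 10.0.0.3 ports 4420 and 4421, the two paths the multipath checks exercise. bdevperf is then launched with core mask 0x4 in -z (wait-for-RPC) mode on its own socket /var/tmp/bdevperf.sock, set up for a 128-deep 4 KiB verify workload. The target-side RPC sequence, condensed from the entries above:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py                 # talks to /var/tmp/spdk.sock by default
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421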
00:20:50.833 14:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:50.833 14:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:51.766 14:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:51.766 14:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:20:51.766 14:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:52.023 14:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:52.286 Nvme0n1 00:20:52.286 14:47:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:52.543 Nvme0n1 00:20:52.543 14:47:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:20:52.543 14:47:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:20:54.442 14:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:20:54.442 14:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:20:54.718 14:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:54.977 14:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:20:55.910 14:47:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:20:55.910 14:47:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:55.910 14:47:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:55.910 14:47:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:56.180 14:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:56.180 14:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:56.180 14:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:56.180 14:47:05 
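On the bdevperf side the same subsystem is attached twice, once per listener, with -x multipath and the same controller name, so the second call adds a second I/O path to the existing Nvme0n1 bdev rather than creating a new controller; bdevperf.py perform_tests then starts the verify workload that keeps running while the ANA states are flipped. A sketch of those initiator-side calls (bperf_rpc is an illustrative wrapper name, the arguments are the ones shown above):

    bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }
    bperf_rpc bdev_nvme_set_options -r -1
    # The first attach creates controller Nvme0 / bdev Nvme0n1; the second adds path 10.0.0.3:4421 to it.
    bperf_rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    bperf_rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    # Kick off the I/O; the ANA sweep below runs while this workload is in flight.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests &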
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:56.437 14:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:56.437 14:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:56.437 14:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:56.437 14:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:56.438 14:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:56.438 14:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:56.438 14:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:56.438 14:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:56.696 14:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:56.696 14:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:56.696 14:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:56.696 14:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:56.954 14:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:56.954 14:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:56.954 14:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:56.954 14:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:57.211 14:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:57.211 14:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:20:57.211 14:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:57.469 14:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
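Every check_status round above reduces to the port_status helper (multipath_status.sh@64): dump bdev_nvme_get_io_paths from the bdevperf app and use jq to read the current, connected, or accessible flag of the io_path whose trsvcid matches the port. A sketch of that helper as reconstructed from the log (the real script may differ in details):

    # port_status <trsvcid> <field> <expected>
    port_status() {
        local port=$1 field=$2 expected=$3
        local actual
        actual=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
        [[ $actual == "$expected" ]]
    }
    port_status 4420 current true        # with both listeners optimized, 4420 is the selected path
    port_status 4421 accessible true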
00:20:57.469 14:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:20:58.849 14:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:20:58.849 14:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:58.849 14:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:58.849 14:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:58.849 14:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:58.849 14:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:58.849 14:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:58.849 14:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:59.106 14:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:59.106 14:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:59.106 14:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:59.106 14:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:59.106 14:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:59.106 14:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:59.106 14:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:59.106 14:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:59.364 14:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:59.364 14:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:59.364 14:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:59.364 14:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:59.622 14:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:59.622 14:47:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:59.622 14:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:59.622 14:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:59.880 14:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:59.880 14:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:20:59.880 14:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:00.138 14:47:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:21:00.138 14:47:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:21:01.511 14:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:21:01.511 14:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:01.511 14:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:01.511 14:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:01.511 14:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:01.511 14:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:01.511 14:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:01.511 14:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:01.769 14:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:01.769 14:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:01.769 14:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:01.769 14:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:01.769 14:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:01.769 14:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:01.769 14:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:01.769 14:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:02.027 14:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:02.027 14:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:02.027 14:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:02.027 14:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:02.285 14:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:02.285 14:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:02.285 14:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:02.285 14:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:02.542 14:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:02.542 14:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:21:02.542 14:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:02.800 14:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:02.801 14:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:21:04.173 14:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:21:04.173 14:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:04.173 14:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:04.173 14:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:04.173 14:47:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:04.173 14:47:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current false 00:21:04.173 14:47:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:04.173 14:47:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:04.431 14:47:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:04.431 14:47:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:04.431 14:47:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:04.431 14:47:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:04.431 14:47:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:04.431 14:47:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:04.431 14:47:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:04.431 14:47:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:04.688 14:47:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:04.688 14:47:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:04.688 14:47:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:04.688 14:47:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:04.946 14:47:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:04.946 14:47:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:04.946 14:47:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:04.946 14:47:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:05.203 14:47:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:05.203 14:47:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:21:05.203 14:47:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:05.460 14:47:14 
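The set_ANA_state helper (multipath_status.sh@59-60) drives the sweep by reprogramming each listener's ANA state on the target; a second later check_status asserts how the initiator sees the two paths: connected stays true as long as the TCP connection is alive, accessible is true only while the listener is optimized or non-optimized, and in these rounds exactly one accessible path is current at a time, with optimized preferred over non-optimized. The inaccessible/inaccessible transition being issued here therefore expects both paths to stay connected while reporting accessible=false. A sketch of the helper, using the same RPC as the log:

    # set_ANA_state <state for port 4420> <state for port 4421>
    set_ANA_state() {
        local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }
    set_ANA_state inaccessible inaccessible
    sleep 1    # let the initiator pick up the ANA change before check_status runs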
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:05.718 14:47:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:21:06.651 14:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:21:06.651 14:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:06.651 14:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:06.651 14:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:06.909 14:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:06.909 14:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:06.909 14:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:06.909 14:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:06.909 14:47:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:06.909 14:47:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:06.909 14:47:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:06.909 14:47:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:07.167 14:47:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:07.167 14:47:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:07.168 14:47:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:07.168 14:47:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:07.426 14:47:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:07.426 14:47:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:07.426 14:47:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:07.426 14:47:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:07.683 14:47:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:07.683 14:47:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:07.683 14:47:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:07.683 14:47:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:07.942 14:47:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:07.942 14:47:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:21:07.942 14:47:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:08.200 14:47:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:08.458 14:47:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:21:09.391 14:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:21:09.391 14:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:09.391 14:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:09.391 14:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:09.658 14:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:09.658 14:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:09.658 14:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:09.658 14:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:09.915 14:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:09.915 14:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:09.915 14:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:09.915 14:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").connected' 00:21:09.915 14:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:09.915 14:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:09.915 14:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:09.915 14:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:10.173 14:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:10.173 14:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:10.173 14:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:10.173 14:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:10.430 14:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:10.430 14:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:10.430 14:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:10.430 14:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:10.688 14:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:10.688 14:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:21:10.945 14:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:21:10.945 14:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:21:11.202 14:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:11.202 14:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:21:12.576 14:47:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:21:12.577 14:47:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:12.577 14:47:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:12.577 14:47:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:12.577 14:47:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:12.577 14:47:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:12.577 14:47:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:12.577 14:47:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:12.834 14:47:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:12.834 14:47:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:12.834 14:47:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:12.834 14:47:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:12.834 14:47:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:12.834 14:47:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:12.834 14:47:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:12.834 14:47:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:13.092 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:13.092 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:13.092 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:13.092 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:13.349 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:13.349 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:13.349 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:13.350 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:13.607 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:13.607 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:21:13.607 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:13.865 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:13.865 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:21:15.252 14:47:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:21:15.252 14:47:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:15.252 14:47:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:15.252 14:47:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:15.252 14:47:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:15.252 14:47:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:15.253 14:47:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:15.253 14:47:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:15.511 14:47:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:15.511 14:47:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:15.511 14:47:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:15.511 14:47:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:15.511 14:47:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:15.511 14:47:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:15.511 14:47:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:15.511 14:47:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:15.770 14:47:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
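From multipath_status.sh@116 onward the bdev's multipath policy is switched to active_active and the same ANA sweep is repeated with different expectations: every path in the best available ANA state now reports current=true (both paths when the listeners are optimized/optimized or non_optimized/non_optimized), whereas in the rounds before the policy change only a single selected path was current at a time. The switch itself, as issued above:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active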
00:21:15.770 14:47:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:15.770 14:47:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:15.770 14:47:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:16.028 14:47:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:16.028 14:47:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:16.028 14:47:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:16.028 14:47:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:16.287 14:47:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:16.287 14:47:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:21:16.288 14:47:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:16.545 14:47:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:21:16.545 14:47:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:21:17.926 14:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:21:17.926 14:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:17.926 14:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:17.926 14:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:17.926 14:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:17.926 14:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:17.926 14:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:17.926 14:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:18.185 14:47:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:18.185 14:47:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:18.185 14:47:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:18.185 14:47:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:18.185 14:47:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:18.185 14:47:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:18.185 14:47:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:18.185 14:47:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:18.442 14:47:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:18.443 14:47:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:18.443 14:47:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:18.443 14:47:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:18.700 14:47:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:18.700 14:47:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:18.700 14:47:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:18.700 14:47:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:18.959 14:47:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:18.959 14:47:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:21:18.959 14:47:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:19.217 14:47:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:19.217 14:47:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:21:20.590 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:21:20.590 14:47:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:20.590 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:20.590 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:20.590 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:20.590 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:20.590 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:20.590 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:20.848 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:20.848 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:20.848 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:20.848 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:20.848 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:20.848 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:20.848 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:20.848 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:21.104 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:21.104 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:21.104 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:21.105 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:21.363 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:21.363 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:21.363 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:21.363 
14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:21.624 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:21.624 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 75022 00:21:21.624 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 75022 ']' 00:21:21.624 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 75022 00:21:21.624 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:21:21.624 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:21.624 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75022 00:21:21.624 killing process with pid 75022 00:21:21.624 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:21:21.624 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:21:21.624 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75022' 00:21:21.624 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 75022 00:21:21.624 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 75022 00:21:21.624 { 00:21:21.624 "results": [ 00:21:21.624 { 00:21:21.624 "job": "Nvme0n1", 00:21:21.624 "core_mask": "0x4", 00:21:21.624 "workload": "verify", 00:21:21.624 "status": "terminated", 00:21:21.624 "verify_range": { 00:21:21.624 "start": 0, 00:21:21.624 "length": 16384 00:21:21.624 }, 00:21:21.624 "queue_depth": 128, 00:21:21.624 "io_size": 4096, 00:21:21.625 "runtime": 29.014627, 00:21:21.625 "iops": 11158.819997927252, 00:21:21.625 "mibps": 43.58914061690333, 00:21:21.625 "io_failed": 0, 00:21:21.625 "io_timeout": 0, 00:21:21.625 "avg_latency_us": 11450.295073339325, 00:21:21.625 "min_latency_us": 261.51384615384615, 00:21:21.625 "max_latency_us": 4026531.84 00:21:21.625 } 00:21:21.625 ], 00:21:21.625 "core_count": 1 00:21:21.625 } 00:21:21.625 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 75022 00:21:21.625 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:21.625 [2024-11-04 14:46:59.891994] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
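The status checks traced above all follow one pattern: query bdev_nvme_get_io_paths over the bdevperf RPC socket, pick the io_path whose listener port (trsvcid) matches, read one of its fields (current, connected or accessible), and compare it with the expected value; set_ANA_state then flips the ANA state of the two target listeners between rounds. Below is a condensed sketch of those two helpers as reconstructed from the trace (socket path, NQN and addresses are copied from the log; the exact implementation in host/multipath_status.sh may differ):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

# port_status <trsvcid> <field> <expected>: read one field (current, connected
# or accessible) of the io_path listening on <trsvcid> and compare it.
port_status() {
    local port=$1 field=$2 expected=$3
    local actual
    actual=$("$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$actual" == "$expected" ]]
}

# set_ANA_state <state for 4420> <state for 4421>: change the ANA state of the
# two target listeners, then give the host a moment to pick up the change.
set_ANA_state() {
    "$rpc_py" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.3 -s 4420 -n "$1"
    "$rpc_py" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.3 -s 4421 -n "$2"
}

# Mirrors the multipath_status.sh@133/@135 step traced above: make 4421
# inaccessible and verify the host stops treating it as a usable path.
set_ANA_state non_optimized inaccessible
sleep 1
port_status 4420 current true && port_status 4421 accessible false && echo 'status OK'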
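The bdevperf result block printed at shutdown is also internally consistent: with 4096-byte I/Os the reported IOPS and MiB/s agree, as a quick check shows (numbers copied from the JSON above):

awk 'BEGIN { iops = 11158.819997927252; io_size = 4096;
             printf "%.2f MiB/s over %.2f s\n", iops * io_size / (1024 * 1024), 29.014627 }'
# prints 43.59 MiB/s over 29.01 s, matching the reported mibps of 43.589 and runtime of 29.014627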
00:21:21.625 [2024-11-04 14:46:59.892066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75022 ] 00:21:21.625 [2024-11-04 14:47:00.029896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.625 [2024-11-04 14:47:00.066097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:21.625 [2024-11-04 14:47:00.096366] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:21.625 Running I/O for 90 seconds... 00:21:21.625 7957.00 IOPS, 31.08 MiB/s [2024-11-04T14:47:30.765Z] 8778.50 IOPS, 34.29 MiB/s [2024-11-04T14:47:30.765Z] 9180.00 IOPS, 35.86 MiB/s [2024-11-04T14:47:30.765Z] 9321.00 IOPS, 36.41 MiB/s [2024-11-04T14:47:30.765Z] 9602.00 IOPS, 37.51 MiB/s [2024-11-04T14:47:30.765Z] 10271.67 IOPS, 40.12 MiB/s [2024-11-04T14:47:30.765Z] 10735.14 IOPS, 41.93 MiB/s [2024-11-04T14:47:30.765Z] 11079.25 IOPS, 43.28 MiB/s [2024-11-04T14:47:30.765Z] 11315.67 IOPS, 44.20 MiB/s [2024-11-04T14:47:30.765Z] 11504.10 IOPS, 44.94 MiB/s [2024-11-04T14:47:30.765Z] 11663.36 IOPS, 45.56 MiB/s [2024-11-04T14:47:30.765Z] 11793.42 IOPS, 46.07 MiB/s [2024-11-04T14:47:30.765Z] [2024-11-04 14:47:14.384368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:29928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.625 [2024-11-04 14:47:14.384448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:21.625 [2024-11-04 14:47:14.384504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.625 [2024-11-04 14:47:14.384521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:21.625 [2024-11-04 14:47:14.384543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.625 [2024-11-04 14:47:14.384556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:21.625 [2024-11-04 14:47:14.384578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.625 [2024-11-04 14:47:14.384591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:21.625 [2024-11-04 14:47:14.384626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.625 [2024-11-04 14:47:14.384640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:21.625 [2024-11-04 14:47:14.384663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:29968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.625 [2024-11-04 14:47:14.384679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:21.625 [2024-11-04 14:47:14.384703] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.625 [2024-11-04 14:47:14.384717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:21.625 [2024-11-04 14:47:14.384733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.625 [2024-11-04 14:47:14.384742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.625 [2024-11-04 14:47:14.384758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:29416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.625 [2024-11-04 14:47:14.384767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:21.625 [2024-11-04 14:47:14.384824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:29424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.625 [2024-11-04 14:47:14.384838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:21.625 [2024-11-04 14:47:14.384859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:29432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.625 [2024-11-04 14:47:14.384871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:21.625 [2024-11-04 14:47:14.384893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:29440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.625 [2024-11-04 14:47:14.384906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:21.625 [2024-11-04 14:47:14.384932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:29448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.625 [2024-11-04 14:47:14.384945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:21.625 [2024-11-04 14:47:14.384968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:29456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.625 [2024-11-04 14:47:14.384981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:21.625 [2024-11-04 14:47:14.385003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.625 [2024-11-04 14:47:14.385018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:21.625 [2024-11-04 14:47:14.385040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:29472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.625 [2024-11-04 14:47:14.385051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0069 p:0 m:0 
dnr:0 00:21:21.625 [2024-11-04 14:47:14.385069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:29480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.625 [2024-11-04 14:47:14.385078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:21.625 [2024-11-04 14:47:14.385095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:29488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.625 [2024-11-04 14:47:14.385104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:21.625 [2024-11-04 14:47:14.385130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:29496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.625 [2024-11-04 14:47:14.385139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:21.625 [2024-11-04 14:47:14.385157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:29504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.625 [2024-11-04 14:47:14.385167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:21.625 [2024-11-04 14:47:14.385183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:29512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.625 [2024-11-04 14:47:14.385192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:21.625 [2024-11-04 14:47:14.385208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:29520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.625 [2024-11-04 14:47:14.385228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:21.625 [2024-11-04 14:47:14.385244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:29528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.625 [2024-11-04 14:47:14.385254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:21.625 [2024-11-04 14:47:14.385272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:29536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.625 [2024-11-04 14:47:14.385306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:21.625 [2024-11-04 14:47:14.385357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.625 [2024-11-04 14:47:14.385372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:21.625 [2024-11-04 14:47:14.385395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:30000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.625 [2024-11-04 14:47:14.385410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:21.625 [2024-11-04 14:47:14.385426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.625 [2024-11-04 14:47:14.385436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:21.625 [2024-11-04 14:47:14.385452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.625 [2024-11-04 14:47:14.385461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:21.625 [2024-11-04 14:47:14.385477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:30024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.626 [2024-11-04 14:47:14.385487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.385502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:30032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.626 [2024-11-04 14:47:14.385511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.385527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:30040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.626 [2024-11-04 14:47:14.385536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.385552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:30048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.626 [2024-11-04 14:47:14.385561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.385578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.626 [2024-11-04 14:47:14.385587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.385603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:29552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.626 [2024-11-04 14:47:14.385633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.385652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:29560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.626 [2024-11-04 14:47:14.385662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.385679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:29568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.626 [2024-11-04 14:47:14.385689] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.385706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:29576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.626 [2024-11-04 14:47:14.385717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.385733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:29584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.626 [2024-11-04 14:47:14.385743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.385759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:29592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.626 [2024-11-04 14:47:14.385767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.385783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:29600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.626 [2024-11-04 14:47:14.385793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.385809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:30056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.626 [2024-11-04 14:47:14.385818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.385834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:30064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.626 [2024-11-04 14:47:14.385843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.385859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:30072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.626 [2024-11-04 14:47:14.385867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.385884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.626 [2024-11-04 14:47:14.385893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.385909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:30088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.626 [2024-11-04 14:47:14.385918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.385934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:30096 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:21.626 [2024-11-04 14:47:14.385948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.385964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:30104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.626 [2024-11-04 14:47:14.385973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.385989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:30112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.626 [2024-11-04 14:47:14.385998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.386014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:29608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.626 [2024-11-04 14:47:14.386024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.386040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:29616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.626 [2024-11-04 14:47:14.386049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.386066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:29624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.626 [2024-11-04 14:47:14.386075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.386092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.626 [2024-11-04 14:47:14.386100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.386116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:29640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.626 [2024-11-04 14:47:14.386125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.386141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:29648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.626 [2024-11-04 14:47:14.386150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.386166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:29656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.626 [2024-11-04 14:47:14.386174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.386191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:49 nsid:1 lba:29664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.626 [2024-11-04 14:47:14.386199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.386227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:30120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.626 [2024-11-04 14:47:14.386237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.386253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:30128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.626 [2024-11-04 14:47:14.386262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.386285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:30136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.626 [2024-11-04 14:47:14.386299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.386321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:30144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.626 [2024-11-04 14:47:14.386330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.386347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:30152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.626 [2024-11-04 14:47:14.386356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.386372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:30160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.626 [2024-11-04 14:47:14.386380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.386396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.626 [2024-11-04 14:47:14.386405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.386421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:30176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.626 [2024-11-04 14:47:14.386430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.386446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:29672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.626 [2024-11-04 14:47:14.386455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.386472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:29680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.626 [2024-11-04 14:47:14.386481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:21.626 [2024-11-04 14:47:14.386498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:29688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.627 [2024-11-04 14:47:14.386507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.386523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:29696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.627 [2024-11-04 14:47:14.386532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.386549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:29704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.627 [2024-11-04 14:47:14.386558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.386574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:29712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.627 [2024-11-04 14:47:14.386583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.386617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:29720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.627 [2024-11-04 14:47:14.386627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.386643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:29728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.627 [2024-11-04 14:47:14.386653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.386671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.627 [2024-11-04 14:47:14.386681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.386699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:30192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.627 [2024-11-04 14:47:14.386708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.386725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:30200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.627 [2024-11-04 14:47:14.386734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 
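The completions echoed in this stretch of the captured bdevperf log all carry ANA status ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. I/O that was still in flight on a path whose ANA state the test had just flipped. Rather than reading the capture line by line, a summary can be pulled from the file the test cats above (path taken from the trace; this one-off summary is not part of the test itself):

log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
# How many READ and WRITE submissions were echoed, and how many completions
# came back carrying the ANA "inaccessible" status code (03/02).
grep -c 'nvme_io_qpair_print_command: \*NOTICE\*: READ'  "$log"
grep -c 'nvme_io_qpair_print_command: \*NOTICE\*: WRITE' "$log"
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)'         "$log"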
00:21:21.627 [2024-11-04 14:47:14.386749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:30208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.627 [2024-11-04 14:47:14.386758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.386774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.627 [2024-11-04 14:47:14.386783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.386800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:30224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.627 [2024-11-04 14:47:14.386808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.386824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:30232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.627 [2024-11-04 14:47:14.386833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.386849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:30240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.627 [2024-11-04 14:47:14.386858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.386879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:29736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.627 [2024-11-04 14:47:14.386888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.386905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:29744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.627 [2024-11-04 14:47:14.386913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.386930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:29752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.627 [2024-11-04 14:47:14.386944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.386961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:29760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.627 [2024-11-04 14:47:14.386970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.386986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:29768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.627 [2024-11-04 14:47:14.386996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.387012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:29776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.627 [2024-11-04 14:47:14.387021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.387037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:29784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.627 [2024-11-04 14:47:14.387046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.387062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.627 [2024-11-04 14:47:14.387072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.387090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:30248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.627 [2024-11-04 14:47:14.387100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.387116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.627 [2024-11-04 14:47:14.387125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.387141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:30264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.627 [2024-11-04 14:47:14.387150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.387166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.627 [2024-11-04 14:47:14.387175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.387191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:30280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.627 [2024-11-04 14:47:14.387200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.387215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.627 [2024-11-04 14:47:14.387224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.387240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.627 [2024-11-04 14:47:14.387253] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.387269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:30304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.627 [2024-11-04 14:47:14.387278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.387301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:30312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.627 [2024-11-04 14:47:14.387314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.387330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:30320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.627 [2024-11-04 14:47:14.387339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.387355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:30328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.627 [2024-11-04 14:47:14.387364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.387380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:30336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.627 [2024-11-04 14:47:14.387389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.387405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:30344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.627 [2024-11-04 14:47:14.387413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.387430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:30352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.627 [2024-11-04 14:47:14.387438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:21.627 [2024-11-04 14:47:14.387454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:30360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.627 [2024-11-04 14:47:14.387463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:14.387480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:30368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.628 [2024-11-04 14:47:14.387488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:14.387504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:29800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:21.628 [2024-11-04 14:47:14.387513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:14.387530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:29808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.628 [2024-11-04 14:47:14.387539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:14.387554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:29816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.628 [2024-11-04 14:47:14.387563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:14.387584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:29824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.628 [2024-11-04 14:47:14.387593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:14.387618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:29832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.628 [2024-11-04 14:47:14.387628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:14.387644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:29840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.628 [2024-11-04 14:47:14.387653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:14.387671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.628 [2024-11-04 14:47:14.387680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:14.388263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.628 [2024-11-04 14:47:14.388280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:14.388316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:30376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.628 [2024-11-04 14:47:14.388327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:14.388351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:30384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.628 [2024-11-04 14:47:14.388360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:14.388384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 
lba:30392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.628 [2024-11-04 14:47:14.388394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:14.388417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:30400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.628 [2024-11-04 14:47:14.388426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:14.388449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:30408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.628 [2024-11-04 14:47:14.388458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:14.388481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:30416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.628 [2024-11-04 14:47:14.388490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:14.388513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:30424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.628 [2024-11-04 14:47:14.388522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:14.388562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:30432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.628 [2024-11-04 14:47:14.388573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:14.388596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:29864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.628 [2024-11-04 14:47:14.388615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:14.388639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:29872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.628 [2024-11-04 14:47:14.388647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:14.388671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:29880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.628 [2024-11-04 14:47:14.388680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:14.388703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:29888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.628 [2024-11-04 14:47:14.388713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:14.388736] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:29896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.628 [2024-11-04 14:47:14.388744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:14.388767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.628 [2024-11-04 14:47:14.388776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:14.388800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:29912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.628 [2024-11-04 14:47:14.388809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:14.388832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.628 [2024-11-04 14:47:14.388841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:21.628 11625.62 IOPS, 45.41 MiB/s [2024-11-04T14:47:30.768Z] 10795.21 IOPS, 42.17 MiB/s [2024-11-04T14:47:30.768Z] 10075.53 IOPS, 39.36 MiB/s [2024-11-04T14:47:30.768Z] 9445.81 IOPS, 36.90 MiB/s [2024-11-04T14:47:30.768Z] 9065.06 IOPS, 35.41 MiB/s [2024-11-04T14:47:30.768Z] 9255.22 IOPS, 36.15 MiB/s [2024-11-04T14:47:30.768Z] 9515.63 IOPS, 37.17 MiB/s [2024-11-04T14:47:30.768Z] 9818.85 IOPS, 38.35 MiB/s [2024-11-04T14:47:30.768Z] 10160.19 IOPS, 39.69 MiB/s [2024-11-04T14:47:30.768Z] 10300.14 IOPS, 40.23 MiB/s [2024-11-04T14:47:30.768Z] 10396.30 IOPS, 40.61 MiB/s [2024-11-04T14:47:30.768Z] 10484.79 IOPS, 40.96 MiB/s [2024-11-04T14:47:30.768Z] 10733.20 IOPS, 41.93 MiB/s [2024-11-04T14:47:30.768Z] 10904.27 IOPS, 42.59 MiB/s [2024-11-04T14:47:30.768Z] [2024-11-04 14:47:28.296416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.628 [2024-11-04 14:47:28.296477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:28.296516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:116304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.628 [2024-11-04 14:47:28.296528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:28.296569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:116832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.628 [2024-11-04 14:47:28.296579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:28.296595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:116848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.628 [2024-11-04 14:47:28.296603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:28.296629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:116864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.628 [2024-11-04 14:47:28.296638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:28.296654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:116344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.628 [2024-11-04 14:47:28.296662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:28.296678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:116880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.628 [2024-11-04 14:47:28.296687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:28.296702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:116896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.628 [2024-11-04 14:47:28.296711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:28.296727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:116904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.628 [2024-11-04 14:47:28.296735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:28.296751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:116920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.628 [2024-11-04 14:47:28.296760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:21.628 [2024-11-04 14:47:28.296776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.629 [2024-11-04 14:47:28.296784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:21.629 [2024-11-04 14:47:28.296799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:116952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.629 [2024-11-04 14:47:28.296808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:21.629 [2024-11-04 14:47:28.296823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:116968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.629 [2024-11-04 14:47:28.296832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:21.629 [2024-11-04 14:47:28.296847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:116984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.629 [2024-11-04 14:47:28.296856] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:21.629 [2024-11-04 14:47:28.296878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:116488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.629 [2024-11-04 14:47:28.296888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:21.629 [2024-11-04 14:47:28.296904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:116528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.629 [2024-11-04 14:47:28.296913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:21.629 [2024-11-04 14:47:28.296929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:116560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.629 [2024-11-04 14:47:28.296937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:21.629 [2024-11-04 14:47:28.296954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:116992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.629 [2024-11-04 14:47:28.296964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:21.629 [2024-11-04 14:47:28.296981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:117008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.629 [2024-11-04 14:47:28.296990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:21.629 [2024-11-04 14:47:28.297006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:117024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.629 [2024-11-04 14:47:28.297015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:21.629 [2024-11-04 14:47:28.297030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:116384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.629 [2024-11-04 14:47:28.297039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:21.629 [2024-11-04 14:47:28.297055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:116416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.629 [2024-11-04 14:47:28.297064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:21.629 [2024-11-04 14:47:28.297080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:116440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.629 [2024-11-04 14:47:28.297088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:21.629 [2024-11-04 14:47:28.297104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:116584 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:21.629 [2024-11-04 14:47:28.297113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:21.629 [2024-11-04 14:47:28.297129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:116616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.629 [2024-11-04 14:47:28.297138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:21.629 [2024-11-04 14:47:28.297154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:116640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.629 [2024-11-04 14:47:28.297162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:21.629 [2024-11-04 14:47:28.297178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:116672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.629 [2024-11-04 14:47:28.297192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:21.629 [2024-11-04 14:47:28.297208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:117040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.629 [2024-11-04 14:47:28.297217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:21.629 [2024-11-04 14:47:28.297233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:117056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.629 [2024-11-04 14:47:28.297241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:21.629 [2024-11-04 14:47:28.297257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.629 [2024-11-04 14:47:28.297266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:21.629 [2024-11-04 14:47:28.297281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:116704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.629 [2024-11-04 14:47:28.297290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:21.629 [2024-11-04 14:47:28.297306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:116736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.629 [2024-11-04 14:47:28.297315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.629 [2024-11-04 14:47:28.297346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:116760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.629 [2024-11-04 14:47:28.297360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.629 [2024-11-04 14:47:28.297383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:65 nsid:1 lba:116792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.629 [2024-11-04 14:47:28.297397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:21.629 [2024-11-04 14:47:28.298508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:117080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.629 [2024-11-04 14:47:28.298529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:21.629 [2024-11-04 14:47:28.298548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:117096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.629 [2024-11-04 14:47:28.298557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:21.629 [2024-11-04 14:47:28.298573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:117112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.629 [2024-11-04 14:47:28.298582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:21.629 [2024-11-04 14:47:28.298598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:117128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.629 [2024-11-04 14:47:28.298620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:21.629 [2024-11-04 14:47:28.298637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:117144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.629 [2024-11-04 14:47:28.298655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:21.629 [2024-11-04 14:47:28.298671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:117160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.629 [2024-11-04 14:47:28.298680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:21.629 [2024-11-04 14:47:28.298697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:116480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.629 [2024-11-04 14:47:28.298706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:21.630 [2024-11-04 14:47:28.298722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:116512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.630 [2024-11-04 14:47:28.298731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:21.630 [2024-11-04 14:47:28.298747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:116536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.630 [2024-11-04 14:47:28.298756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:21.630 [2024-11-04 14:47:28.298772] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:116568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.630 [2024-11-04 14:47:28.298781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:21.630 [2024-11-04 14:47:28.298797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:117184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.630 [2024-11-04 14:47:28.298806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:21.630 [2024-11-04 14:47:28.298822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:117200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.630 [2024-11-04 14:47:28.298831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:21.630 [2024-11-04 14:47:28.298847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:117216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.630 [2024-11-04 14:47:28.298856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:21.630 11016.93 IOPS, 43.03 MiB/s [2024-11-04T14:47:30.770Z] 11092.32 IOPS, 43.33 MiB/s [2024-11-04T14:47:30.770Z] 11160.86 IOPS, 43.60 MiB/s [2024-11-04T14:47:30.770Z] Received shutdown signal, test time was about 29.015280 seconds 00:21:21.630 00:21:21.630 Latency(us) 00:21:21.630 [2024-11-04T14:47:30.770Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.630 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:21.630 Verification LBA range: start 0x0 length 0x4000 00:21:21.630 Nvme0n1 : 29.01 11158.82 43.59 0.00 0.00 11450.30 261.51 4026531.84 00:21:21.630 [2024-11-04T14:47:30.770Z] =================================================================================================================== 00:21:21.630 [2024-11-04T14:47:30.770Z] Total : 11158.82 43.59 0.00 0.00 11450.30 261.51 4026531.84 00:21:21.630 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:21.887 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:21:21.887 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:21.887 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:21:21.887 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:21.887 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:21:21.887 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:21.888 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:21:21.888 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:21.888 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:21:21.888 rmmod nvme_tcp 00:21:21.888 rmmod nvme_fabrics 00:21:21.888 rmmod nvme_keyring 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 74972 ']' 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 74972 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 74972 ']' 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 74972 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74972 00:21:22.146 killing process with pid 74972 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74972' 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 74972 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 74972 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:22.146 14:47:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:22.146 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:22.405 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:22.405 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:22.405 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:22.405 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:22.405 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:22.405 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.405 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:22.405 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.405 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:21:22.405 00:21:22.405 real 0m34.153s 00:21:22.405 user 1m49.395s 00:21:22.405 sys 0m8.461s 00:21:22.405 ************************************ 00:21:22.405 END TEST nvmf_host_multipath_status 00:21:22.405 ************************************ 00:21:22.405 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:22.405 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:22.405 14:47:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:22.405 14:47:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:22.405 14:47:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:22.405 14:47:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.405 ************************************ 00:21:22.405 START TEST nvmf_discovery_remove_ifc 00:21:22.405 ************************************ 00:21:22.405 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:22.405 * Looking for test storage... 
00:21:22.405 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:22.405 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:22.405 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:21:22.405 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:22.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.664 --rc genhtml_branch_coverage=1 00:21:22.664 --rc genhtml_function_coverage=1 00:21:22.664 --rc genhtml_legend=1 00:21:22.664 --rc geninfo_all_blocks=1 00:21:22.664 --rc geninfo_unexecuted_blocks=1 00:21:22.664 00:21:22.664 ' 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:22.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.664 --rc genhtml_branch_coverage=1 00:21:22.664 --rc genhtml_function_coverage=1 00:21:22.664 --rc genhtml_legend=1 00:21:22.664 --rc geninfo_all_blocks=1 00:21:22.664 --rc geninfo_unexecuted_blocks=1 00:21:22.664 00:21:22.664 ' 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:22.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.664 --rc genhtml_branch_coverage=1 00:21:22.664 --rc genhtml_function_coverage=1 00:21:22.664 --rc genhtml_legend=1 00:21:22.664 --rc geninfo_all_blocks=1 00:21:22.664 --rc geninfo_unexecuted_blocks=1 00:21:22.664 00:21:22.664 ' 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:22.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.664 --rc genhtml_branch_coverage=1 00:21:22.664 --rc genhtml_function_coverage=1 00:21:22.664 --rc genhtml_legend=1 00:21:22.664 --rc geninfo_all_blocks=1 00:21:22.664 --rc geninfo_unexecuted_blocks=1 00:21:22.664 00:21:22.664 ' 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:22.664 14:47:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.664 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:22.665 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:22.665 14:47:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:22.665 Cannot find device "nvmf_init_br" 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:22.665 Cannot find device "nvmf_init_br2" 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:22.665 Cannot find device "nvmf_tgt_br" 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:22.665 Cannot find device "nvmf_tgt_br2" 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:22.665 Cannot find device "nvmf_init_br" 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:22.665 Cannot find device "nvmf_init_br2" 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:22.665 Cannot find device "nvmf_tgt_br" 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:22.665 Cannot find device "nvmf_tgt_br2" 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:22.665 Cannot find device "nvmf_br" 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:22.665 Cannot find device "nvmf_init_if" 00:21:22.665 14:47:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:22.665 Cannot find device "nvmf_init_if2" 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:22.665 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:22.665 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:22.665 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:22.923 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:22.923 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:22.923 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:22.923 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:22.923 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:22.923 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:22.923 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:22.923 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:22.923 14:47:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:22.923 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:22.923 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:22.923 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:22.923 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:22.923 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:22.923 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:22.923 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:22.923 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:22.923 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:22.923 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:22.923 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:22.923 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:22.923 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:22.923 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:22.923 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:22.923 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:21:22.923 00:21:22.923 --- 10.0.0.3 ping statistics --- 00:21:22.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.923 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:21:22.923 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:22.923 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:22.923 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.025 ms 00:21:22.923 00:21:22.923 --- 10.0.0.4 ping statistics --- 00:21:22.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.923 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:21:22.923 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:22.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:22.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.014 ms 00:21:22.923 00:21:22.923 --- 10.0.0.1 ping statistics --- 00:21:22.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.924 rtt min/avg/max/mdev = 0.014/0.014/0.014/0.000 ms 00:21:22.924 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:22.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:22.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:21:22.924 00:21:22.924 --- 10.0.0.2 ping statistics --- 00:21:22.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.924 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:21:22.924 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:22.924 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:21:22.924 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:22.924 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:22.924 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:22.924 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:22.924 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:22.924 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:22.924 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:22.924 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:21:22.924 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:22.924 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:22.924 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:22.924 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=75818 00:21:22.924 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 75818 00:21:22.924 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:22.924 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 75818 ']' 00:21:22.924 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.924 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:22.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.924 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:22.924 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:22.924 14:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:22.924 [2024-11-04 14:47:31.964543] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:21:22.924 [2024-11-04 14:47:31.964614] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.183 [2024-11-04 14:47:32.097820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.183 [2024-11-04 14:47:32.132007] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.183 [2024-11-04 14:47:32.132050] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.183 [2024-11-04 14:47:32.132057] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.183 [2024-11-04 14:47:32.132062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.183 [2024-11-04 14:47:32.132066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:23.183 [2024-11-04 14:47:32.132331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.183 [2024-11-04 14:47:32.162100] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:23.749 14:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:23.749 14:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:21:23.749 14:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:23.749 14:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:23.749 14:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:23.749 14:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.749 14:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:21:23.749 14:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.749 14:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:24.006 [2024-11-04 14:47:32.897142] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.006 [2024-11-04 14:47:32.905216] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:21:24.006 null0 00:21:24.006 [2024-11-04 14:47:32.937171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:24.006 14:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.006 14:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=75850 00:21:24.006 14:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 75850 /tmp/host.sock 00:21:24.006 
14:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 75850 ']' 00:21:24.006 14:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:21:24.006 14:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:21:24.006 14:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:24.006 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:24.006 14:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:24.006 14:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:24.006 14:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:24.006 [2024-11-04 14:47:32.998334] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:21:24.006 [2024-11-04 14:47:32.998393] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75850 ] 00:21:24.006 [2024-11-04 14:47:33.138822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.264 [2024-11-04 14:47:33.174295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.876 14:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:24.876 14:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:21:24.876 14:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:24.876 14:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:21:24.876 14:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.876 14:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:24.876 14:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.876 14:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:21:24.876 14:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.876 14:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:24.876 [2024-11-04 14:47:33.912694] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:24.876 14:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.876 14:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:21:24.876 14:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.876 14:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:26.249 [2024-11-04 14:47:34.949786] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:21:26.249 [2024-11-04 14:47:34.949813] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:21:26.249 [2024-11-04 14:47:34.949826] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:26.249 [2024-11-04 14:47:34.955820] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:21:26.249 [2024-11-04 14:47:35.010124] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:21:26.249 [2024-11-04 14:47:35.010879] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x12a0fb0:1 started. 00:21:26.249 [2024-11-04 14:47:35.012383] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:26.249 [2024-11-04 14:47:35.012427] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:26.249 [2024-11-04 14:47:35.012445] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:26.249 [2024-11-04 14:47:35.012458] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:21:26.249 [2024-11-04 14:47:35.012478] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:21:26.249 14:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.249 14:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:21:26.249 14:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:26.249 14:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:26.249 14:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.249 14:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:26.249 [2024-11-04 14:47:35.018508] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x12a0fb0 was disconnected and freed. delete nvme_qpair. 
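The host-side setup traced above reduces to a short RPC sequence against the host app's private socket. A minimal sketch of that sequence, assuming the scripts/rpc.py client and the same /tmp/host.sock path used in this run (the harness's rpc_cmd wrapper does the equivalent and waits for the socket to appear before the first call); all flags are copied from the trace rather than re-derived:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/tmp/host.sock

# start the host application with subsystem init deferred until RPCs arrive
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r "$sock" --wait-for-rpc -L bdev_nvme &

"$rpc" -s "$sock" bdev_nvme_set_options -e 1     # option exactly as issued in this run
"$rpc" -s "$sock" framework_start_init           # complete the startup deferred by --wait-for-rpc
"$rpc" -s "$sock" bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
    --wait-for-attach                            # short timeouts so the removed path fails over quickly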
00:21:26.249 14:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:26.249 14:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:26.249 14:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:26.249 14:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.249 14:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:21:26.249 14:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:21:26.249 14:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:21:26.249 14:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:21:26.249 14:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:26.249 14:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:26.249 14:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:26.249 14:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.249 14:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:26.249 14:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:26.249 14:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:26.249 14:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.250 14:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:26.250 14:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:27.182 14:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:27.182 14:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:27.182 14:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:27.182 14:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:27.182 14:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.182 14:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:27.182 14:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:27.182 14:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.182 14:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:27.182 14:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:28.114 14:47:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:28.114 14:47:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:28.114 14:47:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:28.114 14:47:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.114 14:47:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:28.114 14:47:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:28.114 14:47:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:28.114 14:47:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.114 14:47:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:28.114 14:47:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:29.047 14:47:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:29.047 14:47:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:29.047 14:47:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.047 14:47:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:29.047 14:47:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:29.047 14:47:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:29.047 14:47:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:29.304 14:47:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.304 14:47:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:29.304 14:47:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:30.260 14:47:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:30.260 14:47:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:30.261 14:47:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:30.261 14:47:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.261 14:47:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:30.261 14:47:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:30.261 14:47:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:30.261 14:47:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.261 14:47:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:30.261 14:47:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:31.192 14:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:31.192 14:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:31.192 14:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.192 14:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:31.192 14:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:31.192 14:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:31.192 14:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:31.192 14:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.192 14:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:31.192 14:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:31.450 [2024-11-04 14:47:40.440559] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:21:31.450 [2024-11-04 14:47:40.440626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.450 [2024-11-04 14:47:40.440635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.450 [2024-11-04 14:47:40.440643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.450 [2024-11-04 14:47:40.440648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.450 [2024-11-04 14:47:40.440654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.450 [2024-11-04 14:47:40.440659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.450 [2024-11-04 14:47:40.440664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.450 [2024-11-04 14:47:40.440669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.450 [2024-11-04 14:47:40.440674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.450 [2024-11-04 14:47:40.440682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.450 [2024-11-04 14:47:40.440687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d240 is same with the state(6) to be set 00:21:31.450 [2024-11-04 14:47:40.450554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127d240 (9): Bad file descriptor 
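The repeated bdev_get_bdevs calls above are the test's polling loop: get_bdev_list flattens the current bdev names into one sorted line, and wait_for_bdev re-checks it once per second until it equals the expected value (nvme0n1 while the path is up, an empty string once the interface removal has drained it). A condensed sketch of those helpers, assuming the same socket path; the in-tree versions may differ in detail:

get_bdev_list() {
    # flatten all bdev names into a single sorted, space-separated line
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # poll once per second until the bdev list matches the expected value
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}

wait_for_bdev nvme0n1   # after discovery attach: wait for the namespace bdev
wait_for_bdev ''        # after the interface goes away: wait for it to drain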
00:21:31.450 [2024-11-04 14:47:40.460571] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:31.450 [2024-11-04 14:47:40.460589] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:31.450 [2024-11-04 14:47:40.460594] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:31.450 [2024-11-04 14:47:40.460597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:31.450 [2024-11-04 14:47:40.460627] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:32.384 14:47:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:32.384 14:47:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:32.384 14:47:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:32.384 14:47:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.384 14:47:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:32.384 14:47:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:32.385 14:47:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:32.643 [2024-11-04 14:47:41.525680] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:21:32.643 [2024-11-04 14:47:41.525810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127d240 with addr=10.0.0.3, port=4420 00:21:32.643 [2024-11-04 14:47:41.525842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d240 is same with the state(6) to be set 00:21:32.643 [2024-11-04 14:47:41.525906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127d240 (9): Bad file descriptor 00:21:32.643 [2024-11-04 14:47:41.527071] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:21:32.643 [2024-11-04 14:47:41.527151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:32.643 [2024-11-04 14:47:41.527171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:32.643 [2024-11-04 14:47:41.527189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:32.643 [2024-11-04 14:47:41.527207] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:32.643 [2024-11-04 14:47:41.527221] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:32.643 [2024-11-04 14:47:41.527230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:32.643 [2024-11-04 14:47:41.527250] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:21:32.643 [2024-11-04 14:47:41.527261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:32.643 14:47:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.643 14:47:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:32.643 14:47:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:33.577 [2024-11-04 14:47:42.527334] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:33.577 [2024-11-04 14:47:42.527377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:33.577 [2024-11-04 14:47:42.527400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:33.577 [2024-11-04 14:47:42.527407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:33.577 [2024-11-04 14:47:42.527414] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:21:33.577 [2024-11-04 14:47:42.527421] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:33.577 [2024-11-04 14:47:42.527426] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:33.577 [2024-11-04 14:47:42.527429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:33.577 [2024-11-04 14:47:42.527453] bdev_nvme.c:7133:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:21:33.577 [2024-11-04 14:47:42.527489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.577 [2024-11-04 14:47:42.527498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.577 [2024-11-04 14:47:42.527507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.577 [2024-11-04 14:47:42.527513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.577 [2024-11-04 14:47:42.527519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.577 [2024-11-04 14:47:42.527524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.577 [2024-11-04 14:47:42.527530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.577 [2024-11-04 14:47:42.527536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.577 [2024-11-04 14:47:42.527543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.577 [2024-11-04 14:47:42.527548] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.577 [2024-11-04 14:47:42.527555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:21:33.577 [2024-11-04 14:47:42.528106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1208a20 (9): Bad file descriptor 00:21:33.577 [2024-11-04 14:47:42.529116] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:21:33.577 [2024-11-04 14:47:42.529136] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:21:33.577 14:47:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:33.577 14:47:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:33.577 14:47:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:33.577 14:47:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.577 14:47:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:33.577 14:47:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:33.577 14:47:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:33.577 14:47:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.577 14:47:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:21:33.577 14:47:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:33.577 14:47:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:33.577 14:47:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:21:33.577 14:47:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:33.577 14:47:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:33.577 14:47:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:33.577 14:47:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:33.577 14:47:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.577 14:47:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:33.578 14:47:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:33.578 14:47:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.578 14:47:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:33.578 14:47:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:34.509 14:47:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:34.509 14:47:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:34.509 14:47:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:34.509 14:47:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:34.509 14:47:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:34.509 14:47:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.509 14:47:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:34.768 14:47:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.768 14:47:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:34.768 14:47:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:35.701 [2024-11-04 14:47:44.532776] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:21:35.701 [2024-11-04 14:47:44.532802] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:21:35.701 [2024-11-04 14:47:44.532812] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:35.701 [2024-11-04 14:47:44.538799] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:21:35.701 [2024-11-04 14:47:44.593013] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:21:35.701 [2024-11-04 14:47:44.593556] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x12a9290:1 started. 00:21:35.701 [2024-11-04 14:47:44.594461] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:35.701 [2024-11-04 14:47:44.594492] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:35.701 [2024-11-04 14:47:44.594507] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:35.701 [2024-11-04 14:47:44.594517] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:21:35.701 [2024-11-04 14:47:44.594522] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:21:35.701 [2024-11-04 14:47:44.601562] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x12a9290 was disconnected and freed. delete nvme_qpair. 
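Stripped of the log noise, the fault this test injects is simply removing the target's address and link inside its network namespace and then restoring them, with the polling helper confirming each transition. A condensed sketch of that sequence using the same interface, address, and namespace names seen in the trace:

tgt_ns() { ip netns exec nvmf_tgt_ns_spdk "$@"; }

# pull the path out from under the connected host: reconnects time out
# (errno 110 above) and the bdev list drains once ctrlr-loss-timeout expires
tgt_ns ip addr del 10.0.0.3/24 dev nvmf_tgt_if
tgt_ns ip link set nvmf_tgt_if down
wait_for_bdev ''

# restore the path: the discovery controller re-attaches and a fresh bdev
# (nvme1n1 in this run) shows up
tgt_ns ip addr add 10.0.0.3/24 dev nvmf_tgt_if
tgt_ns ip link set nvmf_tgt_if up
wait_for_bdev nvme1n1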
00:21:35.701 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:35.701 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:35.701 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:35.701 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.701 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:35.701 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:35.701 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:35.701 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.701 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:21:35.701 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:21:35.701 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 75850 00:21:35.701 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 75850 ']' 00:21:35.701 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 75850 00:21:35.701 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:21:35.701 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:35.701 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75850 00:21:35.701 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:35.701 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:35.701 killing process with pid 75850 00:21:35.701 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75850' 00:21:35.701 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 75850 00:21:35.701 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 75850 00:21:35.701 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:21:35.701 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:35.701 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:21:35.960 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:35.960 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:21:35.960 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:35.960 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:35.960 rmmod nvme_tcp 00:21:35.960 rmmod nvme_fabrics 00:21:35.960 rmmod nvme_keyring 00:21:35.960 14:47:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:35.960 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:21:35.960 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:21:35.960 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 75818 ']' 00:21:35.960 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 75818 00:21:35.960 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 75818 ']' 00:21:35.960 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 75818 00:21:35.960 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:21:35.960 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:35.960 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75818 00:21:35.960 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:35.960 killing process with pid 75818 00:21:35.960 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:35.960 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75818' 00:21:35.960 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 75818 00:21:35.960 14:47:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 75818 00:21:35.960 14:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:35.960 14:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:35.960 14:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:35.960 14:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:21:35.960 14:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:35.960 14:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:21:35.960 14:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:21:36.218 14:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:36.218 14:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:36.218 14:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:36.218 14:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:36.218 14:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:36.218 14:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:36.218 14:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:36.218 14:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:36.218 14:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:36.218 14:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:36.218 14:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:36.218 14:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:36.218 14:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:36.218 14:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:36.218 14:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:36.218 14:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:36.218 14:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.218 14:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:36.218 14:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.218 14:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:21:36.218 00:21:36.218 real 0m13.871s 00:21:36.218 user 0m23.781s 00:21:36.218 sys 0m1.979s 00:21:36.218 14:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:36.218 ************************************ 00:21:36.218 END TEST nvmf_discovery_remove_ifc 00:21:36.218 14:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:36.218 ************************************ 00:21:36.218 14:47:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:21:36.218 14:47:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:36.218 14:47:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:36.218 14:47:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.218 ************************************ 00:21:36.218 START TEST nvmf_identify_kernel_target 00:21:36.218 ************************************ 00:21:36.218 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:21:36.476 * Looking for test storage... 
00:21:36.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:36.476 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:36.476 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:21:36.476 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:36.476 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:36.476 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:36.476 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:36.476 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:36.476 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:21:36.476 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:21:36.476 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:21:36.476 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:21:36.476 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:21:36.476 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:21:36.476 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:21:36.476 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:36.476 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:21:36.476 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:21:36.476 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:36.476 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:36.476 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:21:36.476 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:21:36.476 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:36.476 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:21:36.476 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:21:36.476 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:21:36.476 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:21:36.476 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:36.476 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:21:36.476 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:21:36.476 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:36.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.477 --rc genhtml_branch_coverage=1 00:21:36.477 --rc genhtml_function_coverage=1 00:21:36.477 --rc genhtml_legend=1 00:21:36.477 --rc geninfo_all_blocks=1 00:21:36.477 --rc geninfo_unexecuted_blocks=1 00:21:36.477 00:21:36.477 ' 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:36.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.477 --rc genhtml_branch_coverage=1 00:21:36.477 --rc genhtml_function_coverage=1 00:21:36.477 --rc genhtml_legend=1 00:21:36.477 --rc geninfo_all_blocks=1 00:21:36.477 --rc geninfo_unexecuted_blocks=1 00:21:36.477 00:21:36.477 ' 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:36.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.477 --rc genhtml_branch_coverage=1 00:21:36.477 --rc genhtml_function_coverage=1 00:21:36.477 --rc genhtml_legend=1 00:21:36.477 --rc geninfo_all_blocks=1 00:21:36.477 --rc geninfo_unexecuted_blocks=1 00:21:36.477 00:21:36.477 ' 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:36.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.477 --rc genhtml_branch_coverage=1 00:21:36.477 --rc genhtml_function_coverage=1 00:21:36.477 --rc genhtml_legend=1 00:21:36.477 --rc geninfo_all_blocks=1 00:21:36.477 --rc geninfo_unexecuted_blocks=1 00:21:36.477 00:21:36.477 ' 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
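The lt/cmp_versions trace above is only deciding whether the installed lcov is older than version 2 so the right coverage flags get exported. A rough standalone sketch of that component-wise comparison (names mirror the trace; the in-tree helper supports more operators and separators):

lt() {
    local -a ver1 ver2
    IFS='.' read -ra ver1 <<< "$1"
    IFS='.' read -ra ver2 <<< "$2"
    local i n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0   # earlier field decides: less-than
        (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1   # greater-than
    done
    return 1                                              # equal is not less-than
}

lt 1.15 2 && echo 'lcov older than 2: keep the legacy branch/function coverage flags'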
00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:36.477 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:21:36.477 14:47:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:36.477 14:47:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:36.477 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:36.478 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:36.478 Cannot find device "nvmf_init_br" 00:21:36.478 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:21:36.478 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:36.478 Cannot find device "nvmf_init_br2" 00:21:36.478 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:21:36.478 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:36.478 Cannot find device "nvmf_tgt_br" 00:21:36.478 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:21:36.478 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:36.478 Cannot find device "nvmf_tgt_br2" 00:21:36.478 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:21:36.478 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:36.478 Cannot find device "nvmf_init_br" 00:21:36.478 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:21:36.478 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:36.478 Cannot find device "nvmf_init_br2" 00:21:36.478 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:21:36.478 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:36.478 Cannot find device "nvmf_tgt_br" 00:21:36.478 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:21:36.478 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:36.478 Cannot find device "nvmf_tgt_br2" 00:21:36.478 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:21:36.478 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:36.478 Cannot find device "nvmf_br" 00:21:36.478 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:21:36.478 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:36.478 Cannot find device "nvmf_init_if" 00:21:36.478 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:21:36.478 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:36.478 Cannot find device "nvmf_init_if2" 00:21:36.478 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:21:36.478 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:36.478 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:36.478 14:47:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:21:36.478 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:36.478 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:36.478 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:21:36.478 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:36.736 14:47:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:36.736 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:36.736 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:21:36.736 00:21:36.736 --- 10.0.0.3 ping statistics --- 00:21:36.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.736 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:36.736 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:36.736 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:21:36.736 00:21:36.736 --- 10.0.0.4 ping statistics --- 00:21:36.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.736 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:36.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:36.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:21:36.736 00:21:36.736 --- 10.0.0.1 ping statistics --- 00:21:36.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.736 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:36.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:36.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:21:36.736 00:21:36.736 --- 10.0.0.2 ping statistics --- 00:21:36.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.736 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:36.736 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:36.737 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:36.737 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:21:36.737 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:21:36.737 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:21:36.737 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:36.737 14:47:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:37.036 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:37.036 Waiting for block devices as requested 00:21:37.036 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:37.294 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:37.294 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:37.294 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:37.294 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:21:37.294 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:21:37.294 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:37.294 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:37.294 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:21:37.294 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:37.294 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:37.294 No valid GPT data, bailing 00:21:37.294 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:37.294 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:37.294 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:37.294 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:21:37.294 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:37.294 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:37.294 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:21:37.294 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:21:37.295 14:47:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:37.295 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:37.295 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:21:37.295 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:37.295 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:37.295 No valid GPT data, bailing 00:21:37.295 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:21:37.295 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:37.295 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:37.295 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:21:37.295 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:37.295 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:37.295 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:21:37.295 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:21:37.295 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:37.295 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:37.295 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:21:37.295 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:37.295 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:37.295 No valid GPT data, bailing 00:21:37.295 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:37.295 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:37.295 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:37.295 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:21:37.295 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:37.295 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:37.295 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:21:37.295 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:21:37.295 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:37.295 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:37.295 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:21:37.295 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:37.295 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:37.295 No valid GPT data, bailing 00:21:37.553 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:37.553 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:37.553 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:37.553 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:21:37.553 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:21:37.553 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:37.553 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:37.553 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:37.553 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:37.553 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:21:37.553 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:21:37.553 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:21:37.553 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:21:37.553 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:21:37.553 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:21:37.553 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:21:37.553 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:37.553 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid=0c7d476c-d4d7-4594-a48a-578d93697ffa -a 10.0.0.1 -t tcp -s 4420 00:21:37.553 00:21:37.553 Discovery Log Number of Records 2, Generation counter 2 00:21:37.553 =====Discovery Log Entry 0====== 00:21:37.553 trtype: tcp 00:21:37.553 adrfam: ipv4 00:21:37.553 subtype: current discovery subsystem 00:21:37.553 treq: not specified, sq flow control disable supported 00:21:37.553 portid: 1 00:21:37.553 trsvcid: 4420 00:21:37.553 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:37.553 traddr: 10.0.0.1 00:21:37.553 eflags: none 00:21:37.553 sectype: none 00:21:37.553 =====Discovery Log Entry 1====== 00:21:37.553 trtype: tcp 00:21:37.553 adrfam: ipv4 00:21:37.553 subtype: nvme subsystem 00:21:37.553 treq: not 
specified, sq flow control disable supported 00:21:37.553 portid: 1 00:21:37.553 trsvcid: 4420 00:21:37.553 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:37.553 traddr: 10.0.0.1 00:21:37.553 eflags: none 00:21:37.553 sectype: none 00:21:37.553 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:21:37.554 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:21:37.554 ===================================================== 00:21:37.554 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:37.554 ===================================================== 00:21:37.554 Controller Capabilities/Features 00:21:37.554 ================================ 00:21:37.554 Vendor ID: 0000 00:21:37.554 Subsystem Vendor ID: 0000 00:21:37.554 Serial Number: a9344f3960e94d8fcd62 00:21:37.554 Model Number: Linux 00:21:37.554 Firmware Version: 6.8.9-20 00:21:37.554 Recommended Arb Burst: 0 00:21:37.554 IEEE OUI Identifier: 00 00 00 00:21:37.554 Multi-path I/O 00:21:37.554 May have multiple subsystem ports: No 00:21:37.554 May have multiple controllers: No 00:21:37.554 Associated with SR-IOV VF: No 00:21:37.554 Max Data Transfer Size: Unlimited 00:21:37.554 Max Number of Namespaces: 0 00:21:37.554 Max Number of I/O Queues: 1024 00:21:37.554 NVMe Specification Version (VS): 1.3 00:21:37.554 NVMe Specification Version (Identify): 1.3 00:21:37.554 Maximum Queue Entries: 1024 00:21:37.554 Contiguous Queues Required: No 00:21:37.554 Arbitration Mechanisms Supported 00:21:37.554 Weighted Round Robin: Not Supported 00:21:37.554 Vendor Specific: Not Supported 00:21:37.554 Reset Timeout: 7500 ms 00:21:37.554 Doorbell Stride: 4 bytes 00:21:37.554 NVM Subsystem Reset: Not Supported 00:21:37.554 Command Sets Supported 00:21:37.554 NVM Command Set: Supported 00:21:37.554 Boot Partition: Not Supported 00:21:37.554 Memory Page Size Minimum: 4096 bytes 00:21:37.554 Memory Page Size Maximum: 4096 bytes 00:21:37.554 Persistent Memory Region: Not Supported 00:21:37.554 Optional Asynchronous Events Supported 00:21:37.554 Namespace Attribute Notices: Not Supported 00:21:37.554 Firmware Activation Notices: Not Supported 00:21:37.554 ANA Change Notices: Not Supported 00:21:37.554 PLE Aggregate Log Change Notices: Not Supported 00:21:37.554 LBA Status Info Alert Notices: Not Supported 00:21:37.554 EGE Aggregate Log Change Notices: Not Supported 00:21:37.554 Normal NVM Subsystem Shutdown event: Not Supported 00:21:37.554 Zone Descriptor Change Notices: Not Supported 00:21:37.554 Discovery Log Change Notices: Supported 00:21:37.554 Controller Attributes 00:21:37.554 128-bit Host Identifier: Not Supported 00:21:37.554 Non-Operational Permissive Mode: Not Supported 00:21:37.554 NVM Sets: Not Supported 00:21:37.554 Read Recovery Levels: Not Supported 00:21:37.554 Endurance Groups: Not Supported 00:21:37.554 Predictable Latency Mode: Not Supported 00:21:37.554 Traffic Based Keep ALive: Not Supported 00:21:37.554 Namespace Granularity: Not Supported 00:21:37.554 SQ Associations: Not Supported 00:21:37.554 UUID List: Not Supported 00:21:37.554 Multi-Domain Subsystem: Not Supported 00:21:37.554 Fixed Capacity Management: Not Supported 00:21:37.554 Variable Capacity Management: Not Supported 00:21:37.554 Delete Endurance Group: Not Supported 00:21:37.554 Delete NVM Set: Not Supported 00:21:37.554 Extended LBA Formats Supported: Not Supported 00:21:37.554 Flexible Data 
Placement Supported: Not Supported 00:21:37.554 00:21:37.554 Controller Memory Buffer Support 00:21:37.554 ================================ 00:21:37.554 Supported: No 00:21:37.554 00:21:37.554 Persistent Memory Region Support 00:21:37.554 ================================ 00:21:37.554 Supported: No 00:21:37.554 00:21:37.554 Admin Command Set Attributes 00:21:37.554 ============================ 00:21:37.554 Security Send/Receive: Not Supported 00:21:37.554 Format NVM: Not Supported 00:21:37.554 Firmware Activate/Download: Not Supported 00:21:37.554 Namespace Management: Not Supported 00:21:37.554 Device Self-Test: Not Supported 00:21:37.554 Directives: Not Supported 00:21:37.554 NVMe-MI: Not Supported 00:21:37.554 Virtualization Management: Not Supported 00:21:37.554 Doorbell Buffer Config: Not Supported 00:21:37.554 Get LBA Status Capability: Not Supported 00:21:37.554 Command & Feature Lockdown Capability: Not Supported 00:21:37.554 Abort Command Limit: 1 00:21:37.554 Async Event Request Limit: 1 00:21:37.554 Number of Firmware Slots: N/A 00:21:37.554 Firmware Slot 1 Read-Only: N/A 00:21:37.554 Firmware Activation Without Reset: N/A 00:21:37.554 Multiple Update Detection Support: N/A 00:21:37.554 Firmware Update Granularity: No Information Provided 00:21:37.554 Per-Namespace SMART Log: No 00:21:37.554 Asymmetric Namespace Access Log Page: Not Supported 00:21:37.554 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:37.554 Command Effects Log Page: Not Supported 00:21:37.554 Get Log Page Extended Data: Supported 00:21:37.554 Telemetry Log Pages: Not Supported 00:21:37.554 Persistent Event Log Pages: Not Supported 00:21:37.554 Supported Log Pages Log Page: May Support 00:21:37.554 Commands Supported & Effects Log Page: Not Supported 00:21:37.554 Feature Identifiers & Effects Log Page:May Support 00:21:37.554 NVMe-MI Commands & Effects Log Page: May Support 00:21:37.554 Data Area 4 for Telemetry Log: Not Supported 00:21:37.554 Error Log Page Entries Supported: 1 00:21:37.554 Keep Alive: Not Supported 00:21:37.554 00:21:37.554 NVM Command Set Attributes 00:21:37.554 ========================== 00:21:37.554 Submission Queue Entry Size 00:21:37.554 Max: 1 00:21:37.554 Min: 1 00:21:37.554 Completion Queue Entry Size 00:21:37.554 Max: 1 00:21:37.554 Min: 1 00:21:37.554 Number of Namespaces: 0 00:21:37.554 Compare Command: Not Supported 00:21:37.554 Write Uncorrectable Command: Not Supported 00:21:37.554 Dataset Management Command: Not Supported 00:21:37.554 Write Zeroes Command: Not Supported 00:21:37.554 Set Features Save Field: Not Supported 00:21:37.554 Reservations: Not Supported 00:21:37.554 Timestamp: Not Supported 00:21:37.554 Copy: Not Supported 00:21:37.554 Volatile Write Cache: Not Present 00:21:37.554 Atomic Write Unit (Normal): 1 00:21:37.554 Atomic Write Unit (PFail): 1 00:21:37.554 Atomic Compare & Write Unit: 1 00:21:37.554 Fused Compare & Write: Not Supported 00:21:37.554 Scatter-Gather List 00:21:37.554 SGL Command Set: Supported 00:21:37.554 SGL Keyed: Not Supported 00:21:37.554 SGL Bit Bucket Descriptor: Not Supported 00:21:37.554 SGL Metadata Pointer: Not Supported 00:21:37.554 Oversized SGL: Not Supported 00:21:37.554 SGL Metadata Address: Not Supported 00:21:37.554 SGL Offset: Supported 00:21:37.554 Transport SGL Data Block: Not Supported 00:21:37.554 Replay Protected Memory Block: Not Supported 00:21:37.554 00:21:37.554 Firmware Slot Information 00:21:37.554 ========================= 00:21:37.554 Active slot: 0 00:21:37.554 00:21:37.554 00:21:37.554 Error Log 
00:21:37.554 ========= 00:21:37.554 00:21:37.554 Active Namespaces 00:21:37.554 ================= 00:21:37.554 Discovery Log Page 00:21:37.554 ================== 00:21:37.554 Generation Counter: 2 00:21:37.554 Number of Records: 2 00:21:37.554 Record Format: 0 00:21:37.554 00:21:37.554 Discovery Log Entry 0 00:21:37.554 ---------------------- 00:21:37.554 Transport Type: 3 (TCP) 00:21:37.554 Address Family: 1 (IPv4) 00:21:37.554 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:37.554 Entry Flags: 00:21:37.554 Duplicate Returned Information: 0 00:21:37.554 Explicit Persistent Connection Support for Discovery: 0 00:21:37.554 Transport Requirements: 00:21:37.554 Secure Channel: Not Specified 00:21:37.554 Port ID: 1 (0x0001) 00:21:37.554 Controller ID: 65535 (0xffff) 00:21:37.554 Admin Max SQ Size: 32 00:21:37.554 Transport Service Identifier: 4420 00:21:37.554 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:37.554 Transport Address: 10.0.0.1 00:21:37.554 Discovery Log Entry 1 00:21:37.554 ---------------------- 00:21:37.554 Transport Type: 3 (TCP) 00:21:37.554 Address Family: 1 (IPv4) 00:21:37.554 Subsystem Type: 2 (NVM Subsystem) 00:21:37.554 Entry Flags: 00:21:37.554 Duplicate Returned Information: 0 00:21:37.554 Explicit Persistent Connection Support for Discovery: 0 00:21:37.554 Transport Requirements: 00:21:37.554 Secure Channel: Not Specified 00:21:37.554 Port ID: 1 (0x0001) 00:21:37.554 Controller ID: 65535 (0xffff) 00:21:37.554 Admin Max SQ Size: 32 00:21:37.554 Transport Service Identifier: 4420 00:21:37.554 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:21:37.554 Transport Address: 10.0.0.1 00:21:37.554 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:37.813 get_feature(0x01) failed 00:21:37.813 get_feature(0x02) failed 00:21:37.813 get_feature(0x04) failed 00:21:37.813 ===================================================== 00:21:37.813 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:37.813 ===================================================== 00:21:37.813 Controller Capabilities/Features 00:21:37.813 ================================ 00:21:37.813 Vendor ID: 0000 00:21:37.813 Subsystem Vendor ID: 0000 00:21:37.813 Serial Number: c6636085586cceb4b506 00:21:37.813 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:21:37.813 Firmware Version: 6.8.9-20 00:21:37.813 Recommended Arb Burst: 6 00:21:37.813 IEEE OUI Identifier: 00 00 00 00:21:37.813 Multi-path I/O 00:21:37.813 May have multiple subsystem ports: Yes 00:21:37.813 May have multiple controllers: Yes 00:21:37.813 Associated with SR-IOV VF: No 00:21:37.813 Max Data Transfer Size: Unlimited 00:21:37.813 Max Number of Namespaces: 1024 00:21:37.813 Max Number of I/O Queues: 128 00:21:37.813 NVMe Specification Version (VS): 1.3 00:21:37.813 NVMe Specification Version (Identify): 1.3 00:21:37.813 Maximum Queue Entries: 1024 00:21:37.813 Contiguous Queues Required: No 00:21:37.813 Arbitration Mechanisms Supported 00:21:37.813 Weighted Round Robin: Not Supported 00:21:37.813 Vendor Specific: Not Supported 00:21:37.813 Reset Timeout: 7500 ms 00:21:37.813 Doorbell Stride: 4 bytes 00:21:37.813 NVM Subsystem Reset: Not Supported 00:21:37.813 Command Sets Supported 00:21:37.813 NVM Command Set: Supported 00:21:37.813 Boot Partition: Not Supported 00:21:37.813 Memory 
Page Size Minimum: 4096 bytes 00:21:37.813 Memory Page Size Maximum: 4096 bytes 00:21:37.813 Persistent Memory Region: Not Supported 00:21:37.813 Optional Asynchronous Events Supported 00:21:37.813 Namespace Attribute Notices: Supported 00:21:37.813 Firmware Activation Notices: Not Supported 00:21:37.813 ANA Change Notices: Supported 00:21:37.813 PLE Aggregate Log Change Notices: Not Supported 00:21:37.813 LBA Status Info Alert Notices: Not Supported 00:21:37.813 EGE Aggregate Log Change Notices: Not Supported 00:21:37.813 Normal NVM Subsystem Shutdown event: Not Supported 00:21:37.813 Zone Descriptor Change Notices: Not Supported 00:21:37.813 Discovery Log Change Notices: Not Supported 00:21:37.813 Controller Attributes 00:21:37.813 128-bit Host Identifier: Supported 00:21:37.813 Non-Operational Permissive Mode: Not Supported 00:21:37.813 NVM Sets: Not Supported 00:21:37.813 Read Recovery Levels: Not Supported 00:21:37.813 Endurance Groups: Not Supported 00:21:37.813 Predictable Latency Mode: Not Supported 00:21:37.813 Traffic Based Keep ALive: Supported 00:21:37.813 Namespace Granularity: Not Supported 00:21:37.813 SQ Associations: Not Supported 00:21:37.813 UUID List: Not Supported 00:21:37.813 Multi-Domain Subsystem: Not Supported 00:21:37.813 Fixed Capacity Management: Not Supported 00:21:37.813 Variable Capacity Management: Not Supported 00:21:37.813 Delete Endurance Group: Not Supported 00:21:37.813 Delete NVM Set: Not Supported 00:21:37.813 Extended LBA Formats Supported: Not Supported 00:21:37.813 Flexible Data Placement Supported: Not Supported 00:21:37.813 00:21:37.813 Controller Memory Buffer Support 00:21:37.813 ================================ 00:21:37.813 Supported: No 00:21:37.813 00:21:37.813 Persistent Memory Region Support 00:21:37.813 ================================ 00:21:37.813 Supported: No 00:21:37.813 00:21:37.813 Admin Command Set Attributes 00:21:37.813 ============================ 00:21:37.813 Security Send/Receive: Not Supported 00:21:37.813 Format NVM: Not Supported 00:21:37.813 Firmware Activate/Download: Not Supported 00:21:37.813 Namespace Management: Not Supported 00:21:37.813 Device Self-Test: Not Supported 00:21:37.813 Directives: Not Supported 00:21:37.813 NVMe-MI: Not Supported 00:21:37.813 Virtualization Management: Not Supported 00:21:37.813 Doorbell Buffer Config: Not Supported 00:21:37.813 Get LBA Status Capability: Not Supported 00:21:37.813 Command & Feature Lockdown Capability: Not Supported 00:21:37.813 Abort Command Limit: 4 00:21:37.813 Async Event Request Limit: 4 00:21:37.813 Number of Firmware Slots: N/A 00:21:37.813 Firmware Slot 1 Read-Only: N/A 00:21:37.813 Firmware Activation Without Reset: N/A 00:21:37.813 Multiple Update Detection Support: N/A 00:21:37.813 Firmware Update Granularity: No Information Provided 00:21:37.813 Per-Namespace SMART Log: Yes 00:21:37.813 Asymmetric Namespace Access Log Page: Supported 00:21:37.813 ANA Transition Time : 10 sec 00:21:37.813 00:21:37.813 Asymmetric Namespace Access Capabilities 00:21:37.813 ANA Optimized State : Supported 00:21:37.813 ANA Non-Optimized State : Supported 00:21:37.813 ANA Inaccessible State : Supported 00:21:37.813 ANA Persistent Loss State : Supported 00:21:37.813 ANA Change State : Supported 00:21:37.813 ANAGRPID is not changed : No 00:21:37.814 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:21:37.814 00:21:37.814 ANA Group Identifier Maximum : 128 00:21:37.814 Number of ANA Group Identifiers : 128 00:21:37.814 Max Number of Allowed Namespaces : 1024 00:21:37.814 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:21:37.814 Command Effects Log Page: Supported 00:21:37.814 Get Log Page Extended Data: Supported 00:21:37.814 Telemetry Log Pages: Not Supported 00:21:37.814 Persistent Event Log Pages: Not Supported 00:21:37.814 Supported Log Pages Log Page: May Support 00:21:37.814 Commands Supported & Effects Log Page: Not Supported 00:21:37.814 Feature Identifiers & Effects Log Page:May Support 00:21:37.814 NVMe-MI Commands & Effects Log Page: May Support 00:21:37.814 Data Area 4 for Telemetry Log: Not Supported 00:21:37.814 Error Log Page Entries Supported: 128 00:21:37.814 Keep Alive: Supported 00:21:37.814 Keep Alive Granularity: 1000 ms 00:21:37.814 00:21:37.814 NVM Command Set Attributes 00:21:37.814 ========================== 00:21:37.814 Submission Queue Entry Size 00:21:37.814 Max: 64 00:21:37.814 Min: 64 00:21:37.814 Completion Queue Entry Size 00:21:37.814 Max: 16 00:21:37.814 Min: 16 00:21:37.814 Number of Namespaces: 1024 00:21:37.814 Compare Command: Not Supported 00:21:37.814 Write Uncorrectable Command: Not Supported 00:21:37.814 Dataset Management Command: Supported 00:21:37.814 Write Zeroes Command: Supported 00:21:37.814 Set Features Save Field: Not Supported 00:21:37.814 Reservations: Not Supported 00:21:37.814 Timestamp: Not Supported 00:21:37.814 Copy: Not Supported 00:21:37.814 Volatile Write Cache: Present 00:21:37.814 Atomic Write Unit (Normal): 1 00:21:37.814 Atomic Write Unit (PFail): 1 00:21:37.814 Atomic Compare & Write Unit: 1 00:21:37.814 Fused Compare & Write: Not Supported 00:21:37.814 Scatter-Gather List 00:21:37.814 SGL Command Set: Supported 00:21:37.814 SGL Keyed: Not Supported 00:21:37.814 SGL Bit Bucket Descriptor: Not Supported 00:21:37.814 SGL Metadata Pointer: Not Supported 00:21:37.814 Oversized SGL: Not Supported 00:21:37.814 SGL Metadata Address: Not Supported 00:21:37.814 SGL Offset: Supported 00:21:37.814 Transport SGL Data Block: Not Supported 00:21:37.814 Replay Protected Memory Block: Not Supported 00:21:37.814 00:21:37.814 Firmware Slot Information 00:21:37.814 ========================= 00:21:37.814 Active slot: 0 00:21:37.814 00:21:37.814 Asymmetric Namespace Access 00:21:37.814 =========================== 00:21:37.814 Change Count : 0 00:21:37.814 Number of ANA Group Descriptors : 1 00:21:37.814 ANA Group Descriptor : 0 00:21:37.814 ANA Group ID : 1 00:21:37.814 Number of NSID Values : 1 00:21:37.814 Change Count : 0 00:21:37.814 ANA State : 1 00:21:37.814 Namespace Identifier : 1 00:21:37.814 00:21:37.814 Commands Supported and Effects 00:21:37.814 ============================== 00:21:37.814 Admin Commands 00:21:37.814 -------------- 00:21:37.814 Get Log Page (02h): Supported 00:21:37.814 Identify (06h): Supported 00:21:37.814 Abort (08h): Supported 00:21:37.814 Set Features (09h): Supported 00:21:37.814 Get Features (0Ah): Supported 00:21:37.814 Asynchronous Event Request (0Ch): Supported 00:21:37.814 Keep Alive (18h): Supported 00:21:37.814 I/O Commands 00:21:37.814 ------------ 00:21:37.814 Flush (00h): Supported 00:21:37.814 Write (01h): Supported LBA-Change 00:21:37.814 Read (02h): Supported 00:21:37.814 Write Zeroes (08h): Supported LBA-Change 00:21:37.814 Dataset Management (09h): Supported 00:21:37.814 00:21:37.814 Error Log 00:21:37.814 ========= 00:21:37.814 Entry: 0 00:21:37.814 Error Count: 0x3 00:21:37.814 Submission Queue Id: 0x0 00:21:37.814 Command Id: 0x5 00:21:37.814 Phase Bit: 0 00:21:37.814 Status Code: 0x2 00:21:37.814 Status Code Type: 0x0 00:21:37.814 Do Not Retry: 1 00:21:37.814 Error 
Location: 0x28 00:21:37.814 LBA: 0x0 00:21:37.814 Namespace: 0x0 00:21:37.814 Vendor Log Page: 0x0 00:21:37.814 ----------- 00:21:37.814 Entry: 1 00:21:37.814 Error Count: 0x2 00:21:37.814 Submission Queue Id: 0x0 00:21:37.814 Command Id: 0x5 00:21:37.814 Phase Bit: 0 00:21:37.814 Status Code: 0x2 00:21:37.814 Status Code Type: 0x0 00:21:37.814 Do Not Retry: 1 00:21:37.814 Error Location: 0x28 00:21:37.814 LBA: 0x0 00:21:37.814 Namespace: 0x0 00:21:37.814 Vendor Log Page: 0x0 00:21:37.814 ----------- 00:21:37.814 Entry: 2 00:21:37.814 Error Count: 0x1 00:21:37.814 Submission Queue Id: 0x0 00:21:37.814 Command Id: 0x4 00:21:37.814 Phase Bit: 0 00:21:37.814 Status Code: 0x2 00:21:37.814 Status Code Type: 0x0 00:21:37.814 Do Not Retry: 1 00:21:37.814 Error Location: 0x28 00:21:37.814 LBA: 0x0 00:21:37.814 Namespace: 0x0 00:21:37.814 Vendor Log Page: 0x0 00:21:37.814 00:21:37.814 Number of Queues 00:21:37.814 ================ 00:21:37.814 Number of I/O Submission Queues: 128 00:21:37.814 Number of I/O Completion Queues: 128 00:21:37.814 00:21:37.814 ZNS Specific Controller Data 00:21:37.814 ============================ 00:21:37.814 Zone Append Size Limit: 0 00:21:37.814 00:21:37.814 00:21:37.814 Active Namespaces 00:21:37.814 ================= 00:21:37.814 get_feature(0x05) failed 00:21:37.814 Namespace ID:1 00:21:37.814 Command Set Identifier: NVM (00h) 00:21:37.814 Deallocate: Supported 00:21:37.814 Deallocated/Unwritten Error: Not Supported 00:21:37.814 Deallocated Read Value: Unknown 00:21:37.814 Deallocate in Write Zeroes: Not Supported 00:21:37.814 Deallocated Guard Field: 0xFFFF 00:21:37.814 Flush: Supported 00:21:37.814 Reservation: Not Supported 00:21:37.814 Namespace Sharing Capabilities: Multiple Controllers 00:21:37.814 Size (in LBAs): 1310720 (5GiB) 00:21:37.814 Capacity (in LBAs): 1310720 (5GiB) 00:21:37.814 Utilization (in LBAs): 1310720 (5GiB) 00:21:37.814 UUID: 443f90fd-4ab1-4984-928e-76237281c7c2 00:21:37.814 Thin Provisioning: Not Supported 00:21:37.814 Per-NS Atomic Units: Yes 00:21:37.814 Atomic Boundary Size (Normal): 0 00:21:37.814 Atomic Boundary Size (PFail): 0 00:21:37.814 Atomic Boundary Offset: 0 00:21:37.814 NGUID/EUI64 Never Reused: No 00:21:37.814 ANA group ID: 1 00:21:37.814 Namespace Write Protected: No 00:21:37.814 Number of LBA Formats: 1 00:21:37.814 Current LBA Format: LBA Format #00 00:21:37.814 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:21:37.814 00:21:37.814 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:21:37.814 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:37.814 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:21:37.814 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:37.814 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:21:37.814 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:37.814 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:37.814 rmmod nvme_tcp 00:21:37.814 rmmod nvme_fabrics 00:21:37.814 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:37.814 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:21:37.814 14:47:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:21:37.814 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:21:37.814 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:37.814 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:37.814 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:37.814 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:21:37.814 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:21:37.814 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:37.814 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:21:37.814 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:37.814 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:37.814 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:38.073 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:38.073 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:38.073 14:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:38.073 14:47:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:38.073 14:47:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:38.073 14:47:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:38.073 14:47:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:38.073 14:47:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:38.073 14:47:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:38.073 14:47:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:38.073 14:47:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:38.073 14:47:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:38.073 14:47:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:38.073 14:47:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.073 14:47:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:38.073 14:47:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.073 14:47:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:21:38.073 14:47:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:21:38.073 14:47:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:38.073 14:47:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:21:38.073 14:47:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:38.073 14:47:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:38.073 14:47:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:38.073 14:47:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:38.073 14:47:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:21:38.073 14:47:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:21:38.331 14:47:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:38.896 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:38.896 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:38.896 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:38.896 00:21:38.896 real 0m2.585s 00:21:38.896 user 0m0.821s 00:21:38.896 sys 0m1.109s 00:21:38.896 14:47:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:38.896 14:47:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.896 ************************************ 00:21:38.896 END TEST nvmf_identify_kernel_target 00:21:38.896 ************************************ 00:21:38.896 14:47:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:21:38.896 14:47:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:38.896 14:47:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:38.896 14:47:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.896 ************************************ 00:21:38.896 START TEST nvmf_auth_host 00:21:38.896 ************************************ 00:21:38.896 14:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:21:39.155 * Looking for test storage... 
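The nvmf_identify_kernel_target run that finishes above reduces to a short, repeatable procedure: expose a free local NVMe namespace through the kernel nvmet/TCP target via configfs, query it with nvme discover and spdk_nvme_identify, then tear everything down. Below is a condensed, untested sketch of that sequence for reference. The NQN, the 10.0.0.1:4420 listener, and the /dev/nvme1n1 backing device are taken from the log itself; the configfs attribute file names (attr_allow_any_host, device_path, enable, addr_*) are the standard nvmet ones and are an assumption here, since the xtrace output does not print redirection targets.

#!/usr/bin/env bash
# Sketch of the kernel NVMe-oF/TCP target lifecycle traced in the log above.
set -euo pipefail

nqn=nqn.2016-06.io.spdk:testnqn
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/$nqn
ns=$subsys/namespaces/1
port=$nvmet/ports/1

modprobe nvmet
modprobe nvmet_tcp

# Subsystem with one namespace backed by an unused local NVMe disk.
mkdir -p "$ns"
echo 1 > "$subsys/attr_allow_any_host"    # accept any host NQN
echo /dev/nvme1n1 > "$ns/device_path"
echo 1 > "$ns/enable"

# TCP listener on 10.0.0.1:4420, then attach the subsystem to the port.
mkdir -p "$port"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

# Same query the test issues (the log also passes --hostnqn/--hostid).
nvme discover -t tcp -a 10.0.0.1 -s 4420

# Teardown mirrors clean_kernel_target: disable the namespace, unlink the
# port, remove the configfs directories bottom-up, unload the modules.
echo 0 > "$ns/enable"
rm -f "$port/subsystems/$nqn"
rmdir "$ns" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet

The spdk_nvme_identify invocations in the log address the same endpoint by passing the trtype/adrfam/traddr/trsvcid/subnqn tuple as a single -r string, so once the port symlink is in place the target can be inspected either through nvme-cli or through the SPDK host stack.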
00:21:39.155 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:39.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.155 --rc genhtml_branch_coverage=1 00:21:39.155 --rc genhtml_function_coverage=1 00:21:39.155 --rc genhtml_legend=1 00:21:39.155 --rc geninfo_all_blocks=1 00:21:39.155 --rc geninfo_unexecuted_blocks=1 00:21:39.155 00:21:39.155 ' 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:39.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.155 --rc genhtml_branch_coverage=1 00:21:39.155 --rc genhtml_function_coverage=1 00:21:39.155 --rc genhtml_legend=1 00:21:39.155 --rc geninfo_all_blocks=1 00:21:39.155 --rc geninfo_unexecuted_blocks=1 00:21:39.155 00:21:39.155 ' 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:39.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.155 --rc genhtml_branch_coverage=1 00:21:39.155 --rc genhtml_function_coverage=1 00:21:39.155 --rc genhtml_legend=1 00:21:39.155 --rc geninfo_all_blocks=1 00:21:39.155 --rc geninfo_unexecuted_blocks=1 00:21:39.155 00:21:39.155 ' 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:39.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.155 --rc genhtml_branch_coverage=1 00:21:39.155 --rc genhtml_function_coverage=1 00:21:39.155 --rc genhtml_legend=1 00:21:39.155 --rc geninfo_all_blocks=1 00:21:39.155 --rc geninfo_unexecuted_blocks=1 00:21:39.155 00:21:39.155 ' 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:39.155 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:39.156 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:39.156 Cannot find device "nvmf_init_br" 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:39.156 Cannot find device "nvmf_init_br2" 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:39.156 Cannot find device "nvmf_tgt_br" 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:39.156 Cannot find device "nvmf_tgt_br2" 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:39.156 Cannot find device "nvmf_init_br" 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:39.156 Cannot find device "nvmf_init_br2" 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:39.156 Cannot find device "nvmf_tgt_br" 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:39.156 Cannot find device "nvmf_tgt_br2" 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:39.156 Cannot find device "nvmf_br" 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:39.156 Cannot find device "nvmf_init_if" 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:39.156 Cannot find device "nvmf_init_if2" 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:39.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:39.156 14:47:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:39.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:39.156 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
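For reference, the topology that nvmf_veth_init is assembling in the trace above condenses to the sketch below. This is a simplified outline rather than the helper itself; the interface names and 10.0.0.x/24 addresses are the ones shown in the trace, and the second initiator/target pair (nvmf_init_if2 / nvmf_tgt_if2) is wired the same way.

  # target side lives in its own network namespace
  ip netns add nvmf_tgt_ns_spdk
  # one veth pair per endpoint; the *_br ends are enslaved to a bridge below
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # initiator at 10.0.0.1, target at 10.0.0.3 (the .2/.4 pair is analogous)
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # a single bridge ties the host-side veth ends together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br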
00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:39.415 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:39.415 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:21:39.415 00:21:39.415 --- 10.0.0.3 ping statistics --- 00:21:39.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.415 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:39.415 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:39.415 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.027 ms 00:21:39.415 00:21:39.415 --- 10.0.0.4 ping statistics --- 00:21:39.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.415 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:39.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:39.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:21:39.415 00:21:39.415 --- 10.0.0.1 ping statistics --- 00:21:39.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.415 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:39.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:39.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:21:39.415 00:21:39.415 --- 10.0.0.2 ping statistics --- 00:21:39.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.415 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:39.415 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:39.416 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:39.416 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:39.416 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:39.416 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:39.416 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:21:39.416 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:39.416 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:39.416 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.416 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=76832 00:21:39.416 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:21:39.416 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 76832 00:21:39.416 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 76832 ']' 00:21:39.416 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.416 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:39.416 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
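With connectivity verified by the pings above, the target application is started inside that namespace and the test blocks until its RPC socket answers. A minimal sketch of that launch-and-wait pattern, using the binary path and flags from the trace (the polling loop is an assumption; the real waitforlisten helper in autotest_common.sh does additional bookkeeping):

  rootdir=/home/vagrant/spdk_repo/spdk
  ip netns exec nvmf_tgt_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvme_auth &
  nvmfpid=$!
  # poll until the app serves RPCs on its UNIX domain socket
  while ! "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
      sleep 0.5
  done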
00:21:39.416 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:39.416 14:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=267b549a91df0142eb334f73d284b156 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.3ZR 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 267b549a91df0142eb334f73d284b156 0 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 267b549a91df0142eb334f73d284b156 0 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=267b549a91df0142eb334f73d284b156 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.3ZR 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.3ZR 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.3ZR 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:40.347 14:47:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ccae77631ceb928780ed15c53c85ff22394240bbfa2301628444b772dddb6887 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.aYH 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ccae77631ceb928780ed15c53c85ff22394240bbfa2301628444b772dddb6887 3 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ccae77631ceb928780ed15c53c85ff22394240bbfa2301628444b772dddb6887 3 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ccae77631ceb928780ed15c53c85ff22394240bbfa2301628444b772dddb6887 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.aYH 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.aYH 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.aYH 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=429afb04368ba23235e728eafac1ddbfb2da89a4c98a16fa 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Fqm 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 429afb04368ba23235e728eafac1ddbfb2da89a4c98a16fa 0 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 429afb04368ba23235e728eafac1ddbfb2da89a4c98a16fa 0 
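Each DH-HMAC-CHAP secret in this block is produced the same way: xxd renders random bytes from /dev/urandom as hex, the value is wrapped into the "DHHC-1:<hash id>:<base64 payload>:" representation, and the result lands in a mode-0600 temp file. A rough sketch of that flow for the "null 32" case follows; the python step paraphrases the inline "python -" call seen in the trace, and the assumption that the payload is the key bytes plus a 4-byte little-endian CRC-32 before base64 encoding comes from the DH-HMAC-CHAP secret representation, not from this log.

  key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex characters
  file=$(mktemp -t spdk.key-null.XXX)
  # hash id 00 = plaintext secret; 1/2/3 would indicate sha256/sha384/sha512
  payload=$(python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); print(base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode())' "$key")
  echo "DHHC-1:00:${payload}:" > "$file"
  chmod 0600 "$file"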
00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=429afb04368ba23235e728eafac1ddbfb2da89a4c98a16fa 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:21:40.347 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Fqm 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Fqm 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Fqm 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=efdcd4213d503f352768953fea269f8d60506dedf7386704 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.0Ks 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key efdcd4213d503f352768953fea269f8d60506dedf7386704 2 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 efdcd4213d503f352768953fea269f8d60506dedf7386704 2 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=efdcd4213d503f352768953fea269f8d60506dedf7386704 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.0Ks 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.0Ks 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.0Ks 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:40.606 14:47:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4b688e030b1b5f17b3d6611fd0271e8d 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.LF8 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4b688e030b1b5f17b3d6611fd0271e8d 1 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4b688e030b1b5f17b3d6611fd0271e8d 1 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4b688e030b1b5f17b3d6611fd0271e8d 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.LF8 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.LF8 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.LF8 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=76ed769aa9cb8219f86cfafd1d43747d 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Zef 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 76ed769aa9cb8219f86cfafd1d43747d 1 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 76ed769aa9cb8219f86cfafd1d43747d 1 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=76ed769aa9cb8219f86cfafd1d43747d 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:21:40.606 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Zef 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Zef 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Zef 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ba4b8e6312119d07fdf76938ce80a278c92345e0de0647e9 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.qWo 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ba4b8e6312119d07fdf76938ce80a278c92345e0de0647e9 2 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ba4b8e6312119d07fdf76938ce80a278c92345e0de0647e9 2 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ba4b8e6312119d07fdf76938ce80a278c92345e0de0647e9 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.qWo 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.qWo 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.qWo 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:40.607 14:47:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d3baf2ba4693e41b27eedc25040e867e 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.SlD 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d3baf2ba4693e41b27eedc25040e867e 0 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d3baf2ba4693e41b27eedc25040e867e 0 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d3baf2ba4693e41b27eedc25040e867e 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:21:40.607 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:40.867 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.SlD 00:21:40.867 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.SlD 00:21:40.867 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.SlD 00:21:40.867 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:21:40.867 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:40.867 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:40.867 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:40.867 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:21:40.867 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:21:40.867 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:40.867 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cda996eb33222a9dc38fc8167dfa309ab87625c695bece6bebb55c79ae3e0cfa 00:21:40.867 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:40.867 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.hJ6 00:21:40.868 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cda996eb33222a9dc38fc8167dfa309ab87625c695bece6bebb55c79ae3e0cfa 3 00:21:40.868 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cda996eb33222a9dc38fc8167dfa309ab87625c695bece6bebb55c79ae3e0cfa 3 00:21:40.868 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:40.868 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:40.868 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cda996eb33222a9dc38fc8167dfa309ab87625c695bece6bebb55c79ae3e0cfa 00:21:40.868 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:21:40.868 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:21:40.868 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.hJ6 00:21:40.868 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.hJ6 00:21:40.868 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.hJ6 00:21:40.868 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:21:40.868 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 76832 00:21:40.868 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 76832 ']' 00:21:40.868 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.868 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:40.868 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.868 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:40.868 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.3ZR 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.aYH ]] 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aYH 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Fqm 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.0Ks ]] 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.0Ks 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.LF8 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Zef ]] 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Zef 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.qWo 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.SlD ]] 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.SlD 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.hJ6 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:41.126 14:47:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:41.126 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:41.383 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:41.383 Waiting for block devices as requested 00:21:41.383 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:41.383 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:41.950 No valid GPT data, bailing 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:41.950 No valid GPT data, bailing 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:41.950 14:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:41.950 No valid GPT data, bailing 00:21:41.950 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:41.950 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:41.950 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:41.950 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:21:41.950 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:41.950 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:41.950 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:21:41.950 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:21:41.950 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:41.950 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:41.950 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:21:41.950 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:41.950 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:41.950 No valid GPT data, bailing 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid=0c7d476c-d4d7-4594-a48a-578d93697ffa -a 10.0.0.1 -t tcp -s 4420 00:21:42.209 00:21:42.209 Discovery Log Number of Records 2, Generation counter 2 00:21:42.209 =====Discovery Log Entry 0====== 00:21:42.209 trtype: tcp 00:21:42.209 adrfam: ipv4 00:21:42.209 subtype: current discovery subsystem 00:21:42.209 treq: not specified, sq flow control disable supported 00:21:42.209 portid: 1 00:21:42.209 trsvcid: 4420 00:21:42.209 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:42.209 traddr: 10.0.0.1 00:21:42.209 eflags: none 00:21:42.209 sectype: none 00:21:42.209 =====Discovery Log Entry 1====== 00:21:42.209 trtype: tcp 00:21:42.209 adrfam: ipv4 00:21:42.209 subtype: nvme subsystem 00:21:42.209 treq: not specified, sq flow control disable supported 00:21:42.209 portid: 1 00:21:42.209 trsvcid: 4420 00:21:42.209 subnqn: nqn.2024-02.io.spdk:cnode0 00:21:42.209 traddr: 10.0.0.1 00:21:42.209 eflags: none 00:21:42.209 sectype: none 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: ]] 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.209 nvme0n1 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.209 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY3YjU0OWE5MWRmMDE0MmViMzM0ZjczZDI4NGIxNTYgYoKv: 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY3YjU0OWE5MWRmMDE0MmViMzM0ZjczZDI4NGIxNTYgYoKv: 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: ]] 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.467 nvme0n1 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.467 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.468 
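
The records above trace connect_authenticate() for keyid 0: the initiator is driven entirely through SPDK RPCs, first limiting the negotiable DH-HMAC-CHAP digests and DH groups, then attaching a controller with the per-keyid secrets, checking that nvme0 appears, and detaching again before the next iteration. A minimal sketch of that sequence, assuming scripts/rpc.py is invoked directly and that the key0/ckey0 names were set up earlier in host/auth.sh (not part of this excerpt):

    # Sketch only -- mirrors the rpc_cmd calls traced above (keyid 0, sha256 + ffdhe2048).
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0
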
14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: ]] 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:42.468 14:47:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.468 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.725 nvme0n1 00:21:42.725 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.725 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:42.725 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:42.725 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.725 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.725 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:21:42.726 14:47:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: ]] 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.726 nvme0n1 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmE0YjhlNjMxMjExOWQwN2ZkZjc2OTM4Y2U4MGEyNzhjOTIzNDVlMGRlMDY0N2U5YU6B1g==: 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmE0YjhlNjMxMjExOWQwN2ZkZjc2OTM4Y2U4MGEyNzhjOTIzNDVlMGRlMDY0N2U5YU6B1g==: 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: ]] 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.726 14:47:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:42.726 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:42.727 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:42.727 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:42.727 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:42.727 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:42.727 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:42.727 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:42.727 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:42.727 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:42.727 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.727 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.984 nvme0n1 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:42.984 
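
Each nvmet_auth_set_key iteration above loads the matching secret into the kernel nvmet target before the initiator attempts the attach; the trace only shows the echo side of each redirection (the digest string, the DH group, and the DHHC-1 secrets), not the destination files. A rough target-side equivalent for one keyid, assuming the standard nvmet configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key), which this excerpt does not itself confirm:

    # Sketch only -- target-side DH-HMAC-CHAP setup for one host/keyid.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest selected by the test loop
    echo ffdhe2048 > "$host/dhchap_dhgroup"        # DH group selected by the test loop
    echo "$key" > "$host/dhchap_key"               # DHHC-1 host secret for this keyid
    [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"   # controller secret, only when bidirectional (keyid 4 above has none)
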
14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=: 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=: 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.984 14:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.984 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.984 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:42.984 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:42.984 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:42.984 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:42.984 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:42.984 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:42.984 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:42.984 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:42.984 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:42.984 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:42.984 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:42.984 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:42.984 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.984 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:21:42.984 nvme0n1 00:21:42.984 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.984 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:42.984 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.984 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.984 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:42.984 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.242 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.242 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:43.242 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.242 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.242 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.242 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:43.242 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:43.242 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:21:43.242 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:43.242 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:43.242 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:43.242 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:43.242 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY3YjU0OWE5MWRmMDE0MmViMzM0ZjczZDI4NGIxNTYgYoKv: 00:21:43.242 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: 00:21:43.242 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:43.242 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY3YjU0OWE5MWRmMDE0MmViMzM0ZjczZDI4NGIxNTYgYoKv: 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: ]] 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:43.500 14:47:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.500 nvme0n1 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:43.500 14:47:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: ]] 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:43.500 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:43.501 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:43.501 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:43.501 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:43.501 14:47:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:43.501 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:43.501 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:43.501 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.501 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.501 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.758 nvme0n1 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: ]] 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.758 nvme0n1 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.758 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.759 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:43.759 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.759 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmE0YjhlNjMxMjExOWQwN2ZkZjc2OTM4Y2U4MGEyNzhjOTIzNDVlMGRlMDY0N2U5YU6B1g==: 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmE0YjhlNjMxMjExOWQwN2ZkZjc2OTM4Y2U4MGEyNzhjOTIzNDVlMGRlMDY0N2U5YU6B1g==: 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: ]] 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.018 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.018 nvme0n1 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=: 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=: 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:44.018 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:44.019 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:44.019 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:44.019 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:44.019 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:44.019 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.019 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.279 nvme0n1 00:21:44.279 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.279 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:44.279 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:44.279 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.279 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.279 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.279 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.279 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:44.279 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.279 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.279 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.279 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:44.279 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:44.279 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:21:44.279 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:44.279 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:44.279 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:44.279 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:44.279 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY3YjU0OWE5MWRmMDE0MmViMzM0ZjczZDI4NGIxNTYgYoKv: 00:21:44.279 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: 00:21:44.279 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:44.279 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY3YjU0OWE5MWRmMDE0MmViMzM0ZjczZDI4NGIxNTYgYoKv: 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: ]] 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.845 14:47:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.845 nvme0n1 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:21:44.845 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:44.846 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:44.846 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:21:44.846 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: ]] 00:21:44.846 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:21:44.846 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:21:44.846 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:44.846 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:44.846 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:44.846 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:44.846 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:44.846 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:44.846 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.846 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.846 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.846 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:44.846 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:44.846 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:44.846 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:44.846 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:44.846 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:44.846 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:44.846 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:44.846 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:44.846 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:44.846 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:45.105 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.105 14:47:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.105 14:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.105 nvme0n1 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: ]] 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.105 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.364 nvme0n1 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmE0YjhlNjMxMjExOWQwN2ZkZjc2OTM4Y2U4MGEyNzhjOTIzNDVlMGRlMDY0N2U5YU6B1g==: 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmE0YjhlNjMxMjExOWQwN2ZkZjc2OTM4Y2U4MGEyNzhjOTIzNDVlMGRlMDY0N2U5YU6B1g==: 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: ]] 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.364 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.622 nvme0n1 00:21:45.622 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.622 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:45.622 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.622 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:45.622 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.622 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.622 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.622 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:45.622 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.622 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.622 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.622 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:45.622 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:21:45.622 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:45.622 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:45.622 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:45.622 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:45.622 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=: 00:21:45.622 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:45.622 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:45.622 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:45.622 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=: 00:21:45.623 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:45.623 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:21:45.623 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:45.623 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:45.623 14:47:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:45.623 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:45.623 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:45.623 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:45.623 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.623 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.623 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.623 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:45.623 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:45.623 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:45.623 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:45.623 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:45.623 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:45.623 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:45.623 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:45.623 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:45.623 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:45.623 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:45.623 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:45.623 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.623 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.881 nvme0n1 00:21:45.881 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.881 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:45.881 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:45.881 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.881 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.881 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.881 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.881 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:45.881 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.881 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.881 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.881 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:45.881 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:45.881 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:21:45.881 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:45.881 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:45.881 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:45.881 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:45.881 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY3YjU0OWE5MWRmMDE0MmViMzM0ZjczZDI4NGIxNTYgYoKv: 00:21:45.881 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: 00:21:45.881 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:45.881 14:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY3YjU0OWE5MWRmMDE0MmViMzM0ZjczZDI4NGIxNTYgYoKv: 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: ]] 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.782 nvme0n1 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: ]] 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:47.782 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:47.783 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:47.783 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:47.783 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:47.783 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:47.783 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:47.783 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:47.783 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:47.783 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:47.783 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:47.783 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.783 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.783 14:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:48.040 nvme0n1 00:21:48.040 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.040 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:48.040 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:48.040 14:47:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.040 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:48.040 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.040 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.040 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:48.040 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.040 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:48.040 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.040 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:48.040 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:21:48.040 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:48.040 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:48.040 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:48.040 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:48.040 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:21:48.040 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:21:48.040 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:48.041 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:48.041 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:21:48.041 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: ]] 00:21:48.041 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:21:48.041 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:21:48.041 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:48.041 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:48.041 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:48.041 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:48.041 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:48.041 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:48.041 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.041 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:48.041 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.041 14:47:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:48.041 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:48.041 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:48.041 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:48.041 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:48.041 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:48.041 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:48.041 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:48.041 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:48.041 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:48.041 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:48.041 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.041 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.041 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:48.306 nvme0n1 00:21:48.306 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.306 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:48.306 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:48.306 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.306 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:48.306 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.306 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.306 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:48.306 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.306 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YmE0YjhlNjMxMjExOWQwN2ZkZjc2OTM4Y2U4MGEyNzhjOTIzNDVlMGRlMDY0N2U5YU6B1g==: 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmE0YjhlNjMxMjExOWQwN2ZkZjc2OTM4Y2U4MGEyNzhjOTIzNDVlMGRlMDY0N2U5YU6B1g==: 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: ]] 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:48.564 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.564 
14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:48.822 nvme0n1 00:21:48.822 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.822 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:48.822 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:48.822 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.822 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:48.822 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.822 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.822 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:48.822 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=: 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=: 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.823 14:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.081 nvme0n1 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:49.081 14:47:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY3YjU0OWE5MWRmMDE0MmViMzM0ZjczZDI4NGIxNTYgYoKv: 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY3YjU0OWE5MWRmMDE0MmViMzM0ZjczZDI4NGIxNTYgYoKv: 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: ]] 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.081 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.647 nvme0n1 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: ]] 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.647 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.648 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:49.648 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:49.648 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:49.648 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:49.648 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:49.648 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:49.648 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:49.648 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:49.648 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:49.648 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:49.648 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:49.648 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.648 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.648 14:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.213 nvme0n1 00:21:50.213 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.213 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:50.213 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.213 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.213 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:50.213 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.213 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: ]] 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:50.214 
14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.214 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.778 nvme0n1 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmE0YjhlNjMxMjExOWQwN2ZkZjc2OTM4Y2U4MGEyNzhjOTIzNDVlMGRlMDY0N2U5YU6B1g==: 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmE0YjhlNjMxMjExOWQwN2ZkZjc2OTM4Y2U4MGEyNzhjOTIzNDVlMGRlMDY0N2U5YU6B1g==: 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: ]] 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.778 14:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.343 nvme0n1 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.343 14:48:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=: 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=: 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:51.343 14:48:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.343 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.909 nvme0n1 00:21:51.909 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.909 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:51.909 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.909 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:51.909 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.909 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.909 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.909 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:51.909 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.909 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.909 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.909 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:51.909 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:51.909 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:51.909 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:21:51.909 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:51.909 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:51.909 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:51.909 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY3YjU0OWE5MWRmMDE0MmViMzM0ZjczZDI4NGIxNTYgYoKv: 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY3YjU0OWE5MWRmMDE0MmViMzM0ZjczZDI4NGIxNTYgYoKv: 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: ]] 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.910 14:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:52.168 nvme0n1 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: ]] 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.168 nvme0n1 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:21:52.168 
14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:52.168 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:52.169 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:21:52.169 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: ]] 00:21:52.169 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:21:52.169 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:21:52.169 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:52.169 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:52.169 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:52.169 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:52.169 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:52.169 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:52.169 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.169 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.169 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.169 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:52.169 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:52.169 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:52.169 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:52.169 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:52.169 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:52.169 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:52.169 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:52.169 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:52.169 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:52.169 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:52.169 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.169 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.169 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.427 nvme0n1 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmE0YjhlNjMxMjExOWQwN2ZkZjc2OTM4Y2U4MGEyNzhjOTIzNDVlMGRlMDY0N2U5YU6B1g==: 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmE0YjhlNjMxMjExOWQwN2ZkZjc2OTM4Y2U4MGEyNzhjOTIzNDVlMGRlMDY0N2U5YU6B1g==: 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: ]] 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:52.427 
14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.427 nvme0n1 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=: 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=: 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.427 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.685 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.685 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:52.685 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:52.685 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:52.685 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:52.685 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:52.685 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:52.685 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:52.685 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:52.685 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:52.685 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:52.685 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:52.685 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:52.685 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.685 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.685 nvme0n1 00:21:52.685 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY3YjU0OWE5MWRmMDE0MmViMzM0ZjczZDI4NGIxNTYgYoKv: 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY3YjU0OWE5MWRmMDE0MmViMzM0ZjczZDI4NGIxNTYgYoKv: 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: ]] 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.686 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.968 nvme0n1 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.968 
14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: ]] 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:52.968 14:48:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.968 14:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.968 nvme0n1 00:21:52.968 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.968 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:52.968 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.968 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.968 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:52.968 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.968 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.968 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:52.968 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.968 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.968 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.968 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:52.968 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:21:52.968 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:52.968 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:52.968 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:52.968 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:52.968 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:21:52.969 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:21:52.969 14:48:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:52.969 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:52.969 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:21:52.969 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: ]] 00:21:52.969 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:21:52.969 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:21:52.969 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:52.969 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:52.969 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:52.969 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:52.969 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:52.969 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:52.969 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.969 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.969 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.969 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:52.969 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:52.969 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:52.969 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:52.969 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:52.969 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:52.969 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:52.969 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:52.969 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:52.969 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:52.969 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:52.969 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.969 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.969 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.227 nvme0n1 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmE0YjhlNjMxMjExOWQwN2ZkZjc2OTM4Y2U4MGEyNzhjOTIzNDVlMGRlMDY0N2U5YU6B1g==: 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmE0YjhlNjMxMjExOWQwN2ZkZjc2OTM4Y2U4MGEyNzhjOTIzNDVlMGRlMDY0N2U5YU6B1g==: 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: ]] 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.227 14:48:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.227 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.485 nvme0n1 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:53.485 
14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=: 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=: 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:53.485 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
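[editor's note] The trace above repeats one pattern per digest/dhgroup/keyid combination: load a DH-HMAC-CHAP key on the target, restrict the host to one digest and DH group, attach the controller with the matching key pair, check that nvme0 appears, and detach. The following is a minimal bash sketch of that loop reconstructed only from the commands visible in the trace; it is not the literal host/auth.sh, and the variable setup (digest, dhgroups, keys/ckeys arrays) is an assumption for illustration.

    # digest and DH groups exercised in this part of the trace
    digest=sha384
    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)
    # keys[i]/ckeys[i] are assumed to hold the DHHC-1 host/controller secrets echoed above
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # target side: install the key for this digest/dhgroup/keyid (helper from host/auth.sh)
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            # host side: allow only this digest and DH group for DH-CHAP negotiation
            rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            # connect with the host key; the real script drops --dhchap-ctrlr-key when
            # no controller key exists for this keyid (e.g. keyid 4 above)
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
            # authentication succeeded if the controller is listed, then clean up
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done

[end editor's note]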
00:21:53.486 nvme0n1 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY3YjU0OWE5MWRmMDE0MmViMzM0ZjczZDI4NGIxNTYgYoKv: 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY3YjU0OWE5MWRmMDE0MmViMzM0ZjczZDI4NGIxNTYgYoKv: 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: ]] 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:53.486 14:48:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.486 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.744 nvme0n1 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:53.744 14:48:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: ]] 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:53.744 14:48:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.744 14:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.001 nvme0n1 00:21:54.001 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.001 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:54.001 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:54.001 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.001 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.001 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.001 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.001 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:54.001 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.001 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.001 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.001 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:54.001 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:21:54.001 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:54.001 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:54.001 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:54.001 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:54.001 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:21:54.001 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:21:54.001 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:54.001 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:54.001 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:21:54.001 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: ]] 00:21:54.001 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:21:54.001 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:21:54.001 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:54.001 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:54.001 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:54.001 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:54.001 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:54.002 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:54.002 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.002 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.002 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.002 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:54.002 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:54.002 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:54.002 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:54.002 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:54.002 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:54.002 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:54.002 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:54.002 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:54.002 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:54.002 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:54.002 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.002 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.002 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.259 nvme0n1 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmE0YjhlNjMxMjExOWQwN2ZkZjc2OTM4Y2U4MGEyNzhjOTIzNDVlMGRlMDY0N2U5YU6B1g==: 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmE0YjhlNjMxMjExOWQwN2ZkZjc2OTM4Y2U4MGEyNzhjOTIzNDVlMGRlMDY0N2U5YU6B1g==: 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: ]] 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.259 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.522 nvme0n1 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=: 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=: 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.522 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.779 nvme0n1 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY3YjU0OWE5MWRmMDE0MmViMzM0ZjczZDI4NGIxNTYgYoKv: 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY3YjU0OWE5MWRmMDE0MmViMzM0ZjczZDI4NGIxNTYgYoKv: 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: ]] 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.779 14:48:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:54.779 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:54.780 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:54.780 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:54.780 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:54.780 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.780 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.780 14:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.345 nvme0n1 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: ]] 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.345 14:48:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.345 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.604 nvme0n1 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: ]] 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.604 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.605 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.605 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:55.605 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:55.605 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:55.605 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:55.605 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:55.605 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:55.605 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:55.605 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:55.605 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:55.605 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:55.605 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:55.605 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.605 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.605 14:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.214 nvme0n1 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmE0YjhlNjMxMjExOWQwN2ZkZjc2OTM4Y2U4MGEyNzhjOTIzNDVlMGRlMDY0N2U5YU6B1g==: 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmE0YjhlNjMxMjExOWQwN2ZkZjc2OTM4Y2U4MGEyNzhjOTIzNDVlMGRlMDY0N2U5YU6B1g==: 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: ]] 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.214 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.473 nvme0n1 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=: 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=: 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:56.473 14:48:05 
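
[editor's note] nvmet_auth_set_key itself is never expanded in this excerpt; the trace only shows the values it echoes (the digest string, the DH group and the DHHC-1 secrets, host/auth.sh@48-51). On a Linux kernel soft target those values would normally be written into the nvmet configfs host entry, so the target side of the key id 4 round above would look roughly like the sketch below. The configfs path and attribute names are assumptions based on the usual nvmet layout, not taken from this log.

    # Hypothetical target-side counterpart of nvmet_auth_set_key for key id 4.
    # Paths and attribute names are assumed (standard Linux nvmet configfs); the
    # host entry for the initiator NQN is assumed to exist already.
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    key='DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=:'  # secret echoed at auth.sh@50 above

    echo 'hmac(sha384)' > "$host_dir/dhchap_hash"      # digest echoed at auth.sh@48
    echo ffdhe6144      > "$host_dir/dhchap_dhgroup"   # DH group echoed at auth.sh@49
    echo "$key"         > "$host_dir/dhchap_key"       # host secret for this round
    # Key id 4 has no controller key (ckey is empty, hence the [[ -z '' ]] branch
    # above), so dhchap_ctrl_key is left unset and authentication is one-way only.
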
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:56.473 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:56.474 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.474 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.731 nvme0n1 00:21:56.731 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.731 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:56.731 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:56.731 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.731 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.731 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY3YjU0OWE5MWRmMDE0MmViMzM0ZjczZDI4NGIxNTYgYoKv: 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY3YjU0OWE5MWRmMDE0MmViMzM0ZjczZDI4NGIxNTYgYoKv: 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: ]] 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.988 14:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.552 nvme0n1 00:21:57.552 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.552 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:57.552 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:57.552 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.552 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.552 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: ]] 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.553 14:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.118 nvme0n1 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:58.118 14:48:07 
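
[editor's note] The get_main_ns_ip helper that runs before every attach is traced line by line (nvmf/common.sh@769-783) but its body is not shown. From the trace it appears to do no more than pick an environment-variable name per transport and dereference it. The sketch below is that reconstruction; details such as the transport variable name ($TEST_TRANSPORT) and the failure handling are guesses, and the real function in nvmf/common.sh may differ.

    # Reconstruction of get_main_ns_ip from the xtrace at nvmf/common.sh@769-783.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs use the target-side IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP runs use the initiator IP

        [[ -z "$TEST_TRANSPORT" ]] && return 1
        [[ -z "${ip_candidates[$TEST_TRANSPORT]}" ]] && return 1

        ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to read
        [[ -z "${!ip}" ]] && return 1          # indirect expansion: its value
        echo "${!ip}"                          # 10.0.0.1 in this run
    }

With TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP=10.0.0.1 exported, this prints 10.0.0.1, which is exactly the address passed to bdev_nvme_attach_controller in every round of this log.
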
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: ]] 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.118 14:48:07 
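
[editor's note] One small piece of bash worth calling out is the array assignment at host/auth.sh@58, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}). The ${var:+...} expansion produces the alternative text only when the controller key for that key id is set and non-empty, and the deliberately unquoted outer expansion then splits it into the two words the RPC expects. That is why key id 4, whose ckey is empty (the [[ -z '' ]] branches earlier), is attached with --dhchap-key key4 alone. A standalone illustration with placeholder secrets:

    # Minimal demo of the ${var:+...} flag-building idiom from host/auth.sh@58.
    # The secret values are placeholders; only the expansion behaviour matters.
    ckeys=([1]="DHHC-1:02:placeholder=:" [4]="")   # indexed array keyed by key id

    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#ckey[@]} extra args: ${ckey[*]}"
    done
    # Output:
    #   keyid=1 -> 2 extra args: --dhchap-ctrlr-key ckey1
    #   keyid=4 -> 0 extra args:
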
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.118 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.682 nvme0n1 00:21:58.682 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.682 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:58.682 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:58.682 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.682 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.682 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.682 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YmE0YjhlNjMxMjExOWQwN2ZkZjc2OTM4Y2U4MGEyNzhjOTIzNDVlMGRlMDY0N2U5YU6B1g==: 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmE0YjhlNjMxMjExOWQwN2ZkZjc2OTM4Y2U4MGEyNzhjOTIzNDVlMGRlMDY0N2U5YU6B1g==: 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: ]] 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:58.683 14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.683 
14:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.248 nvme0n1 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=: 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=: 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.248 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.831 nvme0n1 00:21:59.831 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.831 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:59.831 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:59.831 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.831 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.831 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.831 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.831 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:21:59.832 14:48:08 
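
[editor's note] At this point the trace moves on to the sha512 digest (host/auth.sh@100) while restarting from the smallest DH group, which exposes the overall shape of the test: three nested loops over digest, DH group and key id, each iteration first configuring the target (nvmet_auth_set_key) and then authenticating from the host (connect_authenticate), both helpers being the ones traced above. A schematic of that structure, with the array contents trimmed to what is visible in this excerpt (the full script may enumerate more values), is:

    # Shape of the main loop in host/auth.sh, reconstructed from the @100-@104
    # markers in the trace. Array contents are illustrative; keys/ckeys hold the
    # DHHC-1 secrets (ids 0-4) echoed throughout the log.
    digests=(sha384 sha512)                    # digests appearing in this excerpt
    dhgroups=(ffdhe2048 ffdhe6144 ffdhe8192)   # DH groups appearing in this excerpt
    keys=(key0 key1 key2 key3 key4)            # placeholders for the real secrets

    for digest in "${digests[@]}"; do             # host/auth.sh@100
        for dhgroup in "${dhgroups[@]}"; do       # host/auth.sh@101
            for keyid in "${!keys[@]}"; do        # host/auth.sh@102
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side (@103)
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side (@104)
            done
        done
    done
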
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY3YjU0OWE5MWRmMDE0MmViMzM0ZjczZDI4NGIxNTYgYoKv: 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY3YjU0OWE5MWRmMDE0MmViMzM0ZjczZDI4NGIxNTYgYoKv: 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: ]] 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:59.832 14:48:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.832 nvme0n1 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.832 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.091 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: ]] 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:22:00.092 14:48:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.092 14:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.092 nvme0n1 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: ]] 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.092 nvme0n1 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.092 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:00.351 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.351 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmE0YjhlNjMxMjExOWQwN2ZkZjc2OTM4Y2U4MGEyNzhjOTIzNDVlMGRlMDY0N2U5YU6B1g==: 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:YmE0YjhlNjMxMjExOWQwN2ZkZjc2OTM4Y2U4MGEyNzhjOTIzNDVlMGRlMDY0N2U5YU6B1g==: 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: ]] 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.352 nvme0n1 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=: 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=: 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.352 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.611 nvme0n1 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY3YjU0OWE5MWRmMDE0MmViMzM0ZjczZDI4NGIxNTYgYoKv: 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY3YjU0OWE5MWRmMDE0MmViMzM0ZjczZDI4NGIxNTYgYoKv: 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: ]] 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:00.611 nvme0n1 00:22:00.611 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: ]] 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.612 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.870 nvme0n1 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:22:00.871 
14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: ]] 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.871 14:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.871 nvme0n1 00:22:00.871 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmE0YjhlNjMxMjExOWQwN2ZkZjc2OTM4Y2U4MGEyNzhjOTIzNDVlMGRlMDY0N2U5YU6B1g==: 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmE0YjhlNjMxMjExOWQwN2ZkZjc2OTM4Y2U4MGEyNzhjOTIzNDVlMGRlMDY0N2U5YU6B1g==: 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: ]] 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:01.129 
14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.129 nvme0n1 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=: 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=: 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.129 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.386 nvme0n1 00:22:01.386 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.386 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:01.386 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.386 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.386 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:01.386 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.386 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.386 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:01.386 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.386 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.386 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.386 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:01.386 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:01.386 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:22:01.386 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:01.386 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:01.386 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:01.386 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:01.386 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY3YjU0OWE5MWRmMDE0MmViMzM0ZjczZDI4NGIxNTYgYoKv: 00:22:01.386 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: 00:22:01.386 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:01.386 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:01.386 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY3YjU0OWE5MWRmMDE0MmViMzM0ZjczZDI4NGIxNTYgYoKv: 00:22:01.386 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: ]] 00:22:01.387 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: 00:22:01.387 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:22:01.387 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:01.387 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:01.387 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:01.387 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:01.387 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:01.387 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:01.387 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.387 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.387 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.387 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:01.387 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:01.387 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:01.387 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:01.387 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:01.387 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:01.387 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:01.387 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:01.387 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:01.387 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:01.387 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:01.387 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.387 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.387 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.644 nvme0n1 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.644 
14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: ]] 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.644 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.645 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:01.645 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:01.645 14:48:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:01.645 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:01.645 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:01.645 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:01.645 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:01.645 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:01.645 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:01.645 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:01.645 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:01.645 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.645 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.645 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.645 nvme0n1 00:22:01.645 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.645 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:01.645 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:01.645 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.645 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:22:01.903 14:48:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: ]] 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.903 nvme0n1 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.903 14:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.903 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.903 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.903 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:01.903 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.903 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.903 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.903 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:01.903 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:22:01.903 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:01.903 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:01.903 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:01.903 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:01.903 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmE0YjhlNjMxMjExOWQwN2ZkZjc2OTM4Y2U4MGEyNzhjOTIzNDVlMGRlMDY0N2U5YU6B1g==: 00:22:01.903 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: 00:22:01.903 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:01.903 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:01.903 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmE0YjhlNjMxMjExOWQwN2ZkZjc2OTM4Y2U4MGEyNzhjOTIzNDVlMGRlMDY0N2U5YU6B1g==: 00:22:01.903 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: ]] 00:22:01.903 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: 00:22:01.903 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:22:01.903 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:01.903 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:01.903 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:01.903 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:01.903 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:01.903 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:01.903 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.903 14:48:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.162 nvme0n1 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:02.162 
14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=: 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=: 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.162 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
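For readability, the dense xtrace output above reduces to one short per-iteration sequence. The condensation below is a hand-written sketch, not part of the captured log: every command, address and NQN is copied verbatim from the trace, while the helper functions it relies on (rpc_cmd, which in SPDK's autotest scripts presumably wraps scripts/rpc.py, and nvmet_auth_set_key, which programs the matching secret on the target side) are defined elsewhere in auth.sh / autotest_common.sh and are only assumed here; likewise, the key names key3/ckey3 refer to secrets registered earlier in the script and not shown in this excerpt.

# Condensed replay of one iteration (sha512 / ffdhe4096 / keyid 3) -- sketch only.
# Target side: nvmet_auth_set_key (see trace above) installs secret #3 for
# hmac(sha512)+ffdhe4096; the exact mechanism is not visible in this excerpt.

# Host side: restrict negotiation to the digest/dhgroup combination under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# Connect with the matching key pair and confirm the controller shows up.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3
[[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]

# Tear down before the next digest/dhgroup/keyid combination.
rpc_cmd bdev_nvme_detach_controller nvme0

The remainder of the excerpt repeats this pattern for ffdhe6144 and ffdhe8192, and then (near the end) deliberately attaches without any --dhchap-key: the JSON-RPC "Input/output error" response shown there is the expected negative result for that unauthenticated attempt.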
00:22:02.420 nvme0n1 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY3YjU0OWE5MWRmMDE0MmViMzM0ZjczZDI4NGIxNTYgYoKv: 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY3YjU0OWE5MWRmMDE0MmViMzM0ZjczZDI4NGIxNTYgYoKv: 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: ]] 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:02.420 14:48:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.420 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.678 nvme0n1 00:22:02.678 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.678 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:02.678 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:02.678 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.678 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.679 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.679 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.679 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:02.679 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.679 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:02.936 14:48:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: ]] 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:02.936 14:48:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.936 14:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.194 nvme0n1 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: ]] 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.194 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.452 nvme0n1 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmE0YjhlNjMxMjExOWQwN2ZkZjc2OTM4Y2U4MGEyNzhjOTIzNDVlMGRlMDY0N2U5YU6B1g==: 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmE0YjhlNjMxMjExOWQwN2ZkZjc2OTM4Y2U4MGEyNzhjOTIzNDVlMGRlMDY0N2U5YU6B1g==: 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: ]] 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.452 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.709 nvme0n1 00:22:03.709 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.709 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:03.709 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:03.709 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.709 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=: 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=: 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.966 14:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.223 nvme0n1 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY3YjU0OWE5MWRmMDE0MmViMzM0ZjczZDI4NGIxNTYgYoKv: 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY3YjU0OWE5MWRmMDE0MmViMzM0ZjczZDI4NGIxNTYgYoKv: 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: ]] 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2NhZTc3NjMxY2ViOTI4NzgwZWQxNWM1M2M4NWZmMjIzOTQyNDBiYmZhMjMwMTYyODQ0NGI3NzJkZGRiNjg4N16vhjE=: 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.223 14:48:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:04.223 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:04.224 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:04.224 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:04.224 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:04.224 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.224 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.224 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.787 nvme0n1 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: ]] 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.787 14:48:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.787 14:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.351 nvme0n1 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: ]] 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.351 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.913 nvme0n1 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmE0YjhlNjMxMjExOWQwN2ZkZjc2OTM4Y2U4MGEyNzhjOTIzNDVlMGRlMDY0N2U5YU6B1g==: 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmE0YjhlNjMxMjExOWQwN2ZkZjc2OTM4Y2U4MGEyNzhjOTIzNDVlMGRlMDY0N2U5YU6B1g==: 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: ]] 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDNiYWYyYmE0NjkzZTQxYjI3ZWVkYzI1MDQwZTg2N2V5kD8n: 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.913 14:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.477 nvme0n1 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=: 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2RhOTk2ZWIzMzIyMmE5ZGMzOGZjODE2N2RmYTMwOWFiODc2MjVjNjk1YmVjZTZiZWJiNTVjNzlhZTNlMGNmYYrjnjI=: 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:06.477 14:48:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.477 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.478 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:06.478 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:06.478 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:06.478 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:06.478 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:06.478 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:06.478 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:06.478 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:06.478 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:06.478 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:06.478 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:06.478 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:06.478 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.478 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.043 nvme0n1 00:22:07.043 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.043 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:07.044 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:07.044 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.044 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.044 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.044 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.044 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:07.044 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.044 14:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: ]] 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.044 request: 00:22:07.044 { 00:22:07.044 "name": "nvme0", 00:22:07.044 "trtype": "tcp", 00:22:07.044 "traddr": "10.0.0.1", 00:22:07.044 "adrfam": "ipv4", 00:22:07.044 "trsvcid": "4420", 00:22:07.044 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:07.044 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:07.044 "prchk_reftag": false, 00:22:07.044 "prchk_guard": false, 00:22:07.044 "hdgst": false, 00:22:07.044 "ddgst": false, 00:22:07.044 "allow_unrecognized_csi": false, 00:22:07.044 "method": "bdev_nvme_attach_controller", 00:22:07.044 "req_id": 1 00:22:07.044 } 00:22:07.044 Got JSON-RPC error response 00:22:07.044 response: 00:22:07.044 { 00:22:07.044 "code": -5, 00:22:07.044 "message": "Input/output error" 00:22:07.044 } 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.044 request: 00:22:07.044 { 00:22:07.044 "name": "nvme0", 00:22:07.044 "trtype": "tcp", 00:22:07.044 "traddr": "10.0.0.1", 00:22:07.044 "adrfam": "ipv4", 00:22:07.044 "trsvcid": "4420", 00:22:07.044 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:07.044 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:07.044 "prchk_reftag": false, 00:22:07.044 "prchk_guard": false, 00:22:07.044 "hdgst": false, 00:22:07.044 "ddgst": false, 00:22:07.044 "dhchap_key": "key2", 00:22:07.044 "allow_unrecognized_csi": false, 00:22:07.044 "method": "bdev_nvme_attach_controller", 00:22:07.044 "req_id": 1 00:22:07.044 } 00:22:07.044 Got JSON-RPC error response 00:22:07.044 response: 00:22:07.044 { 00:22:07.044 "code": -5, 00:22:07.044 "message": "Input/output error" 00:22:07.044 } 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:07.044 14:48:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:07.044 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:07.045 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:07.045 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:07.045 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:07.045 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:07.045 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:07.045 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:07.045 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:07.045 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:07.045 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:07.045 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:22:07.045 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:07.045 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:07.045 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:07.045 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:07.045 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:07.045 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:07.045 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.045 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.045 request: 00:22:07.045 { 00:22:07.045 "name": "nvme0", 00:22:07.045 "trtype": "tcp", 00:22:07.045 "traddr": "10.0.0.1", 00:22:07.045 "adrfam": "ipv4", 00:22:07.045 "trsvcid": "4420", 
00:22:07.045 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:07.045 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:07.045 "prchk_reftag": false, 00:22:07.045 "prchk_guard": false, 00:22:07.045 "hdgst": false, 00:22:07.045 "ddgst": false, 00:22:07.045 "dhchap_key": "key1", 00:22:07.045 "dhchap_ctrlr_key": "ckey2", 00:22:07.045 "allow_unrecognized_csi": false, 00:22:07.045 "method": "bdev_nvme_attach_controller", 00:22:07.045 "req_id": 1 00:22:07.045 } 00:22:07.045 Got JSON-RPC error response 00:22:07.045 response: 00:22:07.045 { 00:22:07.045 "code": -5, 00:22:07.045 "message": "Input/output error" 00:22:07.045 } 00:22:07.045 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:07.045 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:22:07.045 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:07.045 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:07.045 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:07.045 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:22:07.303 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:07.303 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:07.303 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:07.303 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:07.303 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:07.303 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.304 nvme0n1 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: ]] 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.304 request: 00:22:07.304 { 00:22:07.304 "name": "nvme0", 00:22:07.304 "dhchap_key": "key1", 00:22:07.304 "dhchap_ctrlr_key": "ckey2", 00:22:07.304 "method": "bdev_nvme_set_keys", 00:22:07.304 "req_id": 1 00:22:07.304 } 00:22:07.304 Got JSON-RPC error response 00:22:07.304 response: 00:22:07.304 
{ 00:22:07.304 "code": -5, 00:22:07.304 "message": "Input/output error" 00:22:07.304 } 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:22:07.304 14:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:22:08.239 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:22:08.239 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:22:08.497 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.497 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.497 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.497 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:22:08.497 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:08.497 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:08.497 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:08.497 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:08.497 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:08.497 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:22:08.497 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:22:08.497 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:08.497 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5YWZiMDQzNjhiYTIzMjM1ZTcyOGVhZmFjMWRkYmZiMmRhODlhNGM5OGExNmZh0U0W2g==: 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: ]] 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWZkY2Q0MjEzZDUwM2YzNTI3Njg5NTNmZWEyNjlmOGQ2MDUwNmRlZGY3Mzg2NzA0udK5Yg==: 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.498 nvme0n1 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI2ODhlMDMwYjFiNWYxN2IzZDY2MTFmZDAyNzFlOGRt3GMb: 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: ]] 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZlZDc2OWFhOWNiODIxOWY4NmNmYWZkMWQ0Mzc0N2ShvnHC: 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.498 request: 00:22:08.498 { 00:22:08.498 "name": "nvme0", 00:22:08.498 "dhchap_key": "key2", 00:22:08.498 "dhchap_ctrlr_key": "ckey1", 00:22:08.498 "method": "bdev_nvme_set_keys", 00:22:08.498 "req_id": 1 00:22:08.498 } 00:22:08.498 Got JSON-RPC error response 00:22:08.498 response: 00:22:08.498 { 00:22:08.498 "code": -13, 00:22:08.498 "message": "Permission denied" 00:22:08.498 } 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:22:08.498 14:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:22:09.430 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:22:09.430 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:22:09.430 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.430 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.687 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.687 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:22:09.687 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:22:09.687 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:22:09.687 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:22:09.687 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:22:09.687 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:22:09.687 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:09.687 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:22:09.687 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:09.687 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:09.687 rmmod nvme_tcp 00:22:09.687 rmmod nvme_fabrics 00:22:09.687 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:09.687 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:22:09.687 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:22:09.687 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 76832 ']' 00:22:09.688 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 76832 00:22:09.688 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 76832 ']' 00:22:09.688 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 76832 00:22:09.688 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:22:09.688 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:09.688 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76832 00:22:09.688 killing process with pid 76832 00:22:09.688 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:09.688 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:09.688 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76832' 00:22:09.688 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 76832 00:22:09.688 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 76832 00:22:09.688 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:09.688 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:09.688 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:09.688 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:22:09.688 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:22:09.688 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:09.688 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:09.688 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:09.688 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:09.688 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:09.688 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:09.688 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:09.688 14:48:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:09.688 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:09.688 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:09.688 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:09.688 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:09.688 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:09.945 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:09.945 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:09.945 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:09.945 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:09.945 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:09.945 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.945 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:09.945 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.945 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:22:09.945 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:09.945 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:09.945 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:22:09.945 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:22:09.945 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:22:09.945 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:09.945 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:09.945 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:09.945 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:09.945 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:22:09.945 14:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:22:09.945 14:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:10.526 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:10.526 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
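Note: the nvmf_auth_host trace above drives SPDK's DH-HMAC-CHAP support through rpc_cmd, which in the autotest helpers wraps the stock scripts/rpc.py client (assumed here). A condensed host-side sketch, using only commands and flags that appear in the trace; the 10.0.0.1:4420 listener, the nqn.2024-02.io.spdk NQNs, and the key1/ckey1-style key names are the test's own, registered earlier in the run and not shown here:

  # limit the initiator to the digest/dhgroup pair under test
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # attach with a host key; add --dhchap-ctrlr-key for bidirectional authentication
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # rotate keys on a live controller; the trace above shows mismatched keys being rejected
  # with -5 (Input/output error) or -13 (Permission denied) depending on the call
  scripts/rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  scripts/rpc.py bdev_nvme_detach_controller nvme0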
00:22:10.796 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:10.796 14:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.3ZR /tmp/spdk.key-null.Fqm /tmp/spdk.key-sha256.LF8 /tmp/spdk.key-sha384.qWo /tmp/spdk.key-sha512.hJ6 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:22:10.796 14:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:11.053 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:11.053 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:11.053 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:11.053 00:22:11.053 real 0m32.078s 00:22:11.053 user 0m28.576s 00:22:11.053 sys 0m2.886s 00:22:11.053 14:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:11.053 ************************************ 00:22:11.053 END TEST nvmf_auth_host 00:22:11.053 ************************************ 00:22:11.053 14:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.053 14:48:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:22:11.053 14:48:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:11.053 14:48:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:11.053 14:48:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:11.053 14:48:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.053 ************************************ 00:22:11.053 START TEST nvmf_digest 00:22:11.053 ************************************ 00:22:11.053 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:11.053 * Looking for test storage... 
00:22:11.053 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:11.053 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:11.053 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:22:11.053 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:11.311 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:11.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.312 --rc genhtml_branch_coverage=1 00:22:11.312 --rc genhtml_function_coverage=1 00:22:11.312 --rc genhtml_legend=1 00:22:11.312 --rc geninfo_all_blocks=1 00:22:11.312 --rc geninfo_unexecuted_blocks=1 00:22:11.312 00:22:11.312 ' 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:11.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.312 --rc genhtml_branch_coverage=1 00:22:11.312 --rc genhtml_function_coverage=1 00:22:11.312 --rc genhtml_legend=1 00:22:11.312 --rc geninfo_all_blocks=1 00:22:11.312 --rc geninfo_unexecuted_blocks=1 00:22:11.312 00:22:11.312 ' 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:11.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.312 --rc genhtml_branch_coverage=1 00:22:11.312 --rc genhtml_function_coverage=1 00:22:11.312 --rc genhtml_legend=1 00:22:11.312 --rc geninfo_all_blocks=1 00:22:11.312 --rc geninfo_unexecuted_blocks=1 00:22:11.312 00:22:11.312 ' 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:11.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.312 --rc genhtml_branch_coverage=1 00:22:11.312 --rc genhtml_function_coverage=1 00:22:11.312 --rc genhtml_legend=1 00:22:11.312 --rc geninfo_all_blocks=1 00:22:11.312 --rc geninfo_unexecuted_blocks=1 00:22:11.312 00:22:11.312 ' 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:11.312 14:48:20 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:11.312 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:11.312 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:11.313 Cannot find device "nvmf_init_br" 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:11.313 Cannot find device "nvmf_init_br2" 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:11.313 Cannot find device "nvmf_tgt_br" 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:22:11.313 Cannot find device "nvmf_tgt_br2" 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:11.313 Cannot find device "nvmf_init_br" 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:11.313 Cannot find device "nvmf_init_br2" 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:11.313 Cannot find device "nvmf_tgt_br" 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:11.313 Cannot find device "nvmf_tgt_br2" 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:11.313 Cannot find device "nvmf_br" 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:11.313 Cannot find device "nvmf_init_if" 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:11.313 Cannot find device "nvmf_init_if2" 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:11.313 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:11.313 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:11.313 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:11.595 14:48:20 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:11.595 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:11.595 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:11.595 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:11.595 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:11.595 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:11.595 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:11.595 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:11.595 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:11.595 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:11.595 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:11.595 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:11.595 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:11.595 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:11.595 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:11.595 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:11.595 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:11.595 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:11.595 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:11.595 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:11.595 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:11.596 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:11.596 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:22:11.596 00:22:11.596 --- 10.0.0.3 ping statistics --- 00:22:11.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.596 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:11.596 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:11.596 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:22:11.596 00:22:11.596 --- 10.0.0.4 ping statistics --- 00:22:11.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.596 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:11.596 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:11.596 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:22:11.596 00:22:11.596 --- 10.0.0.1 ping statistics --- 00:22:11.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.596 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:11.596 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:11.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:22:11.596 00:22:11.596 --- 10.0.0.2 ping statistics --- 00:22:11.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.596 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:22:11.596 ************************************ 00:22:11.596 START TEST nvmf_digest_clean 00:22:11.596 ************************************ 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
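For reference, the topology that nvmf_veth_init assembles in the trace above (and that the four pings just verified) can be reproduced by hand along these lines; this is a condensed sketch covering only the first initiator/target veth pair, with addresses and interface names taken from the log:

    # target side lives in a dedicated network namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # a bridge joins the host-side ends of both pairs
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # open the NVMe/TCP port and allow traffic across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3                                              # host -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1               # target namespace -> host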
00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=78430 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 78430 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 78430 ']' 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:11.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:11.596 14:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:11.596 [2024-11-04 14:48:20.615749] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:22:11.596 [2024-11-04 14:48:20.615804] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.854 [2024-11-04 14:48:20.754488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.854 [2024-11-04 14:48:20.788804] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.854 [2024-11-04 14:48:20.788855] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.854 [2024-11-04 14:48:20.788861] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.854 [2024-11-04 14:48:20.788866] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:11.854 [2024-11-04 14:48:20.788870] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
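The target itself is launched inside that namespace with --wait-for-rpc, so it sits idle until configured over /var/tmp/spdk.sock; roughly as follows (the readiness poll below is only illustrative, the test uses its own waitforlisten helper):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # poll the RPC socket until the application is ready to accept configuration
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
            &> /dev/null; do
        sleep 0.1
    done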
00:22:11.854 [2024-11-04 14:48:20.789123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.419 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:12.419 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:22:12.419 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:12.419 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:12.419 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:12.419 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.419 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:22:12.420 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:22:12.420 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:22:12.420 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.420 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:12.420 [2024-11-04 14:48:21.555463] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:12.678 null0 00:22:12.678 [2024-11-04 14:48:21.595389] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.678 [2024-11-04 14:48:21.619457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:12.678 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.678 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:22:12.678 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:12.678 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:12.678 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:22:12.678 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:22:12.678 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:22:12.678 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:12.678 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=78462 00:22:12.678 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 78462 /var/tmp/bperf.sock 00:22:12.678 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 78462 ']' 00:22:12.678 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:12.678 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
--wait-for-rpc 00:22:12.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:12.678 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:12.678 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:12.678 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:12.678 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:12.678 [2024-11-04 14:48:21.659727] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:22:12.678 [2024-11-04 14:48:21.659784] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78462 ] 00:22:12.678 [2024-11-04 14:48:21.795638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.935 [2024-11-04 14:48:21.831399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:13.499 14:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:13.499 14:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:22:13.499 14:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:13.499 14:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:13.499 14:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:13.756 [2024-11-04 14:48:22.709073] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:13.756 14:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:13.756 14:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:14.013 nvme0n1 00:22:14.013 14:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:14.013 14:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:14.013 Running I/O for 2 seconds... 
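Each run_bperf invocation drives bdevperf entirely over its RPC socket, as the trace above shows; condensed into one sequence (paths, socket, and arguments copied from the log):

    # start bdevperf idle (-z) and paused until framework_start_init
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    # attach the remote namespace over NVMe/TCP with data digest (--ddgst) enabled
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # run the 2-second workload against the resulting nvme0n1 bdev
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests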
00:22:16.326 15748.00 IOPS, 61.52 MiB/s [2024-11-04T14:48:25.466Z] 17018.00 IOPS, 66.48 MiB/s 00:22:16.326 Latency(us) 00:22:16.326 [2024-11-04T14:48:25.466Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.326 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:16.326 nvme0n1 : 2.00 17051.72 66.61 0.00 0.00 7503.79 6301.54 19963.27 00:22:16.326 [2024-11-04T14:48:25.466Z] =================================================================================================================== 00:22:16.326 [2024-11-04T14:48:25.466Z] Total : 17051.72 66.61 0.00 0.00 7503.79 6301.54 19963.27 00:22:16.326 { 00:22:16.326 "results": [ 00:22:16.326 { 00:22:16.326 "job": "nvme0n1", 00:22:16.326 "core_mask": "0x2", 00:22:16.326 "workload": "randread", 00:22:16.326 "status": "finished", 00:22:16.326 "queue_depth": 128, 00:22:16.326 "io_size": 4096, 00:22:16.326 "runtime": 2.003552, 00:22:16.326 "iops": 17051.716152113844, 00:22:16.326 "mibps": 66.6082662191947, 00:22:16.326 "io_failed": 0, 00:22:16.326 "io_timeout": 0, 00:22:16.326 "avg_latency_us": 7503.790989705763, 00:22:16.326 "min_latency_us": 6301.538461538462, 00:22:16.326 "max_latency_us": 19963.273846153847 00:22:16.326 } 00:22:16.326 ], 00:22:16.326 "core_count": 1 00:22:16.326 } 00:22:16.326 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:22:16.326 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:22:16.326 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:16.327 | select(.opcode=="crc32c") 00:22:16.327 | "\(.module_name) \(.executed)"' 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 78462 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 78462 ']' 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 78462 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78462 00:22:16.327 killing process with pid 78462 00:22:16.327 Received shutdown signal, test time was about 2.000000 seconds 00:22:16.327 00:22:16.327 Latency(us) 00:22:16.327 [2024-11-04T14:48:25.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:22:16.327 [2024-11-04T14:48:25.467Z] =================================================================================================================== 00:22:16.327 [2024-11-04T14:48:25.467Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78462' 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 78462 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 78462 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=78522 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 78522 /var/tmp/bperf.sock 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 78522 ']' 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:16.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:16.327 14:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:16.327 [2024-11-04 14:48:25.451675] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
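After every workload the script verifies, through the accel framework statistics, that CRC-32C digests were really computed and by the expected module (software here, since DSA is disabled); the check boils down to the jq filter visible in the trace:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
        | {
            read -r acc_module acc_executed
            (( acc_executed > 0 ))          # at least one crc32c operation ran through accel
            [[ $acc_module == software ]]   # and the software module executed it
          }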
00:22:16.327 [2024-11-04 14:48:25.451911] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78522 ] 00:22:16.327 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:16.327 Zero copy mechanism will not be used. 00:22:16.585 [2024-11-04 14:48:25.588635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.585 [2024-11-04 14:48:25.618621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.149 14:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:17.149 14:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:22:17.149 14:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:17.149 14:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:17.149 14:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:17.407 [2024-11-04 14:48:26.494168] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:17.407 14:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:17.407 14:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:17.664 nvme0n1 00:22:17.665 14:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:17.665 14:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:17.922 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:17.922 Zero copy mechanism will not be used. 00:22:17.922 Running I/O for 2 seconds... 
00:22:19.789 11616.00 IOPS, 1452.00 MiB/s [2024-11-04T14:48:28.929Z] 11664.00 IOPS, 1458.00 MiB/s 00:22:19.789 Latency(us) 00:22:19.789 [2024-11-04T14:48:28.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.789 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:19.789 nvme0n1 : 2.00 11660.76 1457.59 0.00 0.00 1369.72 1279.21 6024.27 00:22:19.789 [2024-11-04T14:48:28.929Z] =================================================================================================================== 00:22:19.789 [2024-11-04T14:48:28.929Z] Total : 11660.76 1457.59 0.00 0.00 1369.72 1279.21 6024.27 00:22:19.789 { 00:22:19.789 "results": [ 00:22:19.789 { 00:22:19.789 "job": "nvme0n1", 00:22:19.789 "core_mask": "0x2", 00:22:19.789 "workload": "randread", 00:22:19.789 "status": "finished", 00:22:19.789 "queue_depth": 16, 00:22:19.789 "io_size": 131072, 00:22:19.789 "runtime": 2.001928, 00:22:19.789 "iops": 11660.759028296721, 00:22:19.789 "mibps": 1457.5948785370902, 00:22:19.789 "io_failed": 0, 00:22:19.789 "io_timeout": 0, 00:22:19.789 "avg_latency_us": 1369.7191796277746, 00:22:19.789 "min_latency_us": 1279.2123076923076, 00:22:19.789 "max_latency_us": 6024.2707692307695 00:22:19.789 } 00:22:19.789 ], 00:22:19.789 "core_count": 1 00:22:19.789 } 00:22:19.789 14:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:22:19.789 14:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:22:19.789 14:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:19.789 | select(.opcode=="crc32c") 00:22:19.789 | "\(.module_name) \(.executed)"' 00:22:19.789 14:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:19.789 14:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:20.047 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:22:20.047 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:22:20.047 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:20.047 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:20.047 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 78522 00:22:20.047 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 78522 ']' 00:22:20.047 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 78522 00:22:20.047 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:22:20.047 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:20.047 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78522 00:22:20.047 killing process with pid 78522 00:22:20.047 Received shutdown signal, test time was about 2.000000 seconds 00:22:20.047 00:22:20.047 Latency(us) 00:22:20.047 [2024-11-04T14:48:29.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:22:20.047 [2024-11-04T14:48:29.187Z] =================================================================================================================== 00:22:20.047 [2024-11-04T14:48:29.187Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:20.047 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:20.047 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:20.047 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78522' 00:22:20.047 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 78522 00:22:20.047 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 78522 00:22:20.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:20.306 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:22:20.306 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:20.306 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:20.306 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:22:20.306 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:22:20.306 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:22:20.306 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:20.306 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=78577 00:22:20.306 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 78577 /var/tmp/bperf.sock 00:22:20.306 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 78577 ']' 00:22:20.306 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:20.306 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:20.306 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:20.306 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:20.306 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:20.306 14:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:20.306 [2024-11-04 14:48:29.241099] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:22:20.306 [2024-11-04 14:48:29.241308] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78577 ] 00:22:20.306 [2024-11-04 14:48:29.373775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.306 [2024-11-04 14:48:29.403886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.239 14:48:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:21.239 14:48:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:22:21.239 14:48:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:21.239 14:48:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:21.239 14:48:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:21.239 [2024-11-04 14:48:30.287495] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:21.239 14:48:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:21.240 14:48:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:21.806 nvme0n1 00:22:21.806 14:48:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:21.806 14:48:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:21.806 Running I/O for 2 seconds... 
00:22:23.674 21210.00 IOPS, 82.85 MiB/s [2024-11-04T14:48:32.814Z] 21273.00 IOPS, 83.10 MiB/s 00:22:23.674 Latency(us) 00:22:23.674 [2024-11-04T14:48:32.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.674 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:23.674 nvme0n1 : 2.01 21315.03 83.26 0.00 0.00 6000.88 5646.18 13208.02 00:22:23.674 [2024-11-04T14:48:32.814Z] =================================================================================================================== 00:22:23.674 [2024-11-04T14:48:32.814Z] Total : 21315.03 83.26 0.00 0.00 6000.88 5646.18 13208.02 00:22:23.674 { 00:22:23.674 "results": [ 00:22:23.674 { 00:22:23.674 "job": "nvme0n1", 00:22:23.674 "core_mask": "0x2", 00:22:23.674 "workload": "randwrite", 00:22:23.674 "status": "finished", 00:22:23.674 "queue_depth": 128, 00:22:23.674 "io_size": 4096, 00:22:23.674 "runtime": 2.00802, 00:22:23.674 "iops": 21315.026742761525, 00:22:23.674 "mibps": 83.26182321391221, 00:22:23.674 "io_failed": 0, 00:22:23.674 "io_timeout": 0, 00:22:23.674 "avg_latency_us": 6000.876412592805, 00:22:23.674 "min_latency_us": 5646.178461538461, 00:22:23.674 "max_latency_us": 13208.024615384615 00:22:23.674 } 00:22:23.674 ], 00:22:23.674 "core_count": 1 00:22:23.674 } 00:22:23.674 14:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:22:23.674 14:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:22:23.674 14:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:23.674 14:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:23.674 | select(.opcode=="crc32c") 00:22:23.674 | "\(.module_name) \(.executed)"' 00:22:23.674 14:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:23.932 14:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:22:23.932 14:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:22:23.932 14:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:23.932 14:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:23.932 14:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 78577 00:22:23.932 14:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 78577 ']' 00:22:23.932 14:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 78577 00:22:23.932 14:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:22:23.932 14:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:23.932 14:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78577 00:22:23.932 14:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:23.932 14:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 
00:22:23.932 14:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78577' 00:22:23.932 killing process with pid 78577 00:22:23.932 Received shutdown signal, test time was about 2.000000 seconds 00:22:23.932 00:22:23.932 Latency(us) 00:22:23.932 [2024-11-04T14:48:33.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.932 [2024-11-04T14:48:33.072Z] =================================================================================================================== 00:22:23.932 [2024-11-04T14:48:33.072Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:23.932 14:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 78577 00:22:23.932 14:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 78577 00:22:24.190 14:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:22:24.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:24.190 14:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:24.190 14:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:24.190 14:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:22:24.190 14:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:22:24.190 14:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:22:24.190 14:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:24.190 14:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=78638 00:22:24.190 14:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:24.190 14:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 78638 /var/tmp/bperf.sock 00:22:24.190 14:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 78638 ']' 00:22:24.190 14:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:24.190 14:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:24.190 14:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:24.190 14:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:24.190 14:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:24.190 [2024-11-04 14:48:33.155231] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
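Every bperf instance is torn down through the killprocess helper whose individual steps appear in the trace (kill -0, ps, kill, wait); simplified, with the error handling of autotest_common.sh omitted:

    killprocess() {
        local pid=$1
        kill -0 "$pid"                                   # fail fast if the process is already gone
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_1 for a bdevperf core
        [[ $process_name != sudo ]]                      # never signal a wrapping sudo directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }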
00:22:24.190 [2024-11-04 14:48:33.155374] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78638 ] 00:22:24.190 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:24.190 Zero copy mechanism will not be used. 00:22:24.190 [2024-11-04 14:48:33.287114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.190 [2024-11-04 14:48:33.316959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.121 14:48:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:25.121 14:48:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:22:25.121 14:48:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:25.121 14:48:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:25.121 14:48:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:25.121 [2024-11-04 14:48:34.236649] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:25.379 14:48:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:25.379 14:48:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:25.636 nvme0n1 00:22:25.636 14:48:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:25.636 14:48:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:25.636 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:25.636 Zero copy mechanism will not be used. 00:22:25.636 Running I/O for 2 seconds... 
00:22:27.502 8989.00 IOPS, 1123.62 MiB/s [2024-11-04T14:48:36.642Z] 9850.50 IOPS, 1231.31 MiB/s 00:22:27.502 Latency(us) 00:22:27.502 [2024-11-04T14:48:36.642Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.502 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:27.502 nvme0n1 : 2.00 9848.67 1231.08 0.00 0.00 1621.67 1348.53 10788.23 00:22:27.502 [2024-11-04T14:48:36.642Z] =================================================================================================================== 00:22:27.502 [2024-11-04T14:48:36.642Z] Total : 9848.67 1231.08 0.00 0.00 1621.67 1348.53 10788.23 00:22:27.502 { 00:22:27.502 "results": [ 00:22:27.502 { 00:22:27.502 "job": "nvme0n1", 00:22:27.502 "core_mask": "0x2", 00:22:27.502 "workload": "randwrite", 00:22:27.502 "status": "finished", 00:22:27.502 "queue_depth": 16, 00:22:27.502 "io_size": 131072, 00:22:27.502 "runtime": 2.001996, 00:22:27.502 "iops": 9848.671026315737, 00:22:27.502 "mibps": 1231.083878289467, 00:22:27.502 "io_failed": 0, 00:22:27.502 "io_timeout": 0, 00:22:27.502 "avg_latency_us": 1621.671695725282, 00:22:27.502 "min_latency_us": 1348.5292307692307, 00:22:27.502 "max_latency_us": 10788.233846153846 00:22:27.502 } 00:22:27.502 ], 00:22:27.502 "core_count": 1 00:22:27.502 } 00:22:27.760 14:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:22:27.760 14:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:22:27.760 14:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:27.760 14:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:27.760 | select(.opcode=="crc32c") 00:22:27.760 | "\(.module_name) \(.executed)"' 00:22:27.760 14:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:27.760 14:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:22:27.760 14:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:22:27.760 14:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:27.760 14:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:27.761 14:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 78638 00:22:27.761 14:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 78638 ']' 00:22:27.761 14:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 78638 00:22:27.761 14:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:22:27.761 14:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:27.761 14:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78638 00:22:27.761 killing process with pid 78638 00:22:27.761 Received shutdown signal, test time was about 2.000000 seconds 00:22:27.761 00:22:27.761 Latency(us) 00:22:27.761 [2024-11-04T14:48:36.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:22:27.761 [2024-11-04T14:48:36.901Z] =================================================================================================================== 00:22:27.761 [2024-11-04T14:48:36.901Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:27.761 14:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:27.761 14:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:27.761 14:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78638' 00:22:27.761 14:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 78638 00:22:27.761 14:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 78638 00:22:28.019 14:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 78430 00:22:28.019 14:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 78430 ']' 00:22:28.019 14:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 78430 00:22:28.019 14:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:22:28.019 14:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:28.019 14:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78430 00:22:28.019 killing process with pid 78430 00:22:28.019 14:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:28.019 14:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:28.019 14:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78430' 00:22:28.019 14:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 78430 00:22:28.019 14:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 78430 00:22:28.019 ************************************ 00:22:28.019 END TEST nvmf_digest_clean 00:22:28.019 ************************************ 00:22:28.019 00:22:28.019 real 0m16.529s 00:22:28.019 user 0m32.209s 00:22:28.019 sys 0m3.465s 00:22:28.019 14:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:28.019 14:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:28.019 14:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:22:28.019 14:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:28.019 14:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:28.019 14:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:22:28.019 ************************************ 00:22:28.019 START TEST nvmf_digest_error 00:22:28.019 ************************************ 00:22:28.019 14:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # run_digest_error 00:22:28.019 14:48:37 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:22:28.019 14:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:28.019 14:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:28.019 14:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:28.019 14:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=78716 00:22:28.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.019 14:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 78716 00:22:28.019 14:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 78716 ']' 00:22:28.019 14:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.019 14:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:28.019 14:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.019 14:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:28.019 14:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:28.019 14:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:28.277 [2024-11-04 14:48:37.188738] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:22:28.277 [2024-11-04 14:48:37.188794] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.277 [2024-11-04 14:48:37.325288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.277 [2024-11-04 14:48:37.354879] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.277 [2024-11-04 14:48:37.355030] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.277 [2024-11-04 14:48:37.355085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.277 [2024-11-04 14:48:37.355107] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:28.277 [2024-11-04 14:48:37.355128] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:28.277 [2024-11-04 14:48:37.355434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.210 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:29.210 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:22:29.210 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:29.210 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:29.210 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:29.210 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.210 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:22:29.210 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.210 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:29.210 [2024-11-04 14:48:38.048075] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:22:29.210 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.210 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:22:29.210 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:22:29.210 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.210 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:29.210 [2024-11-04 14:48:38.084008] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:29.210 null0 00:22:29.210 [2024-11-04 14:48:38.120628] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.210 [2024-11-04 14:48:38.144697] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:29.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
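The notices just above (the null0 bdev, the TCP transport init, and the listener on 10.0.0.3 port 4420) come from the common target configuration step. A rough sketch of the kind of rpc.py sequence that produces them; the command names are standard SPDK RPCs, the NQN and address match what the initiator attaches to later in this log, but the bdev size, block size, and serial number here are assumptions:

  # sketch only: TCP transport, a null backing bdev, and a listener on the test address
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py bdev_null_create null0 100 4096                                    # size/block size assumed
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # serial assumed
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420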
00:22:29.210 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.210 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:22:29.210 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:22:29.210 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:22:29.210 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:22:29.210 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:22:29.210 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:22:29.210 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=78748 00:22:29.210 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 78748 /var/tmp/bperf.sock 00:22:29.210 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 78748 ']' 00:22:29.210 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:29.210 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:29.211 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:29.211 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:29.211 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:29.211 [2024-11-04 14:48:38.183597] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
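The initiator side is a bdevperf instance started idle on its own RPC socket so the test can configure it before any workload runs; the command line is the one echoed above, and every later bperf_rpc call is simply rpc.py pointed at that socket. Sketch, with paths as printed in the log:

  # -z: start idle and wait for RPCs, -r: private RPC socket, -m 2: core mask (core 1),
  # -w/-o/-t/-q: randread, 4 KiB I/O, 2 seconds, queue depth 128
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &
  # bperf_rpc <method> is then equivalent to:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock <method> [args]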
00:22:29.211 [2024-11-04 14:48:38.183810] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78748 ] 00:22:29.211 [2024-11-04 14:48:38.312008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.211 [2024-11-04 14:48:38.344312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.473 [2024-11-04 14:48:38.373142] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:30.038 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:30.038 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:22:30.038 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:30.038 14:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:30.295 14:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:30.295 14:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.295 14:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:30.295 14:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.295 14:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:30.295 14:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:30.295 nvme0n1 00:22:30.295 14:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:30.295 14:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.295 14:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:30.552 14:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.552 14:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:30.552 14:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:30.552 Running I/O for 2 seconds... 
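Everything that follows is the intended failure mode of this test: the target's crc32c (which produces the NVMe/TCP data digest) has been handed to the error module and told to corrupt the next 256 operations, while the controller on the bdevperf side was attached with --ddgst so it verifies the data digest of every read it receives. A condensed recap of the host/digest.sh steps echoed just above, with rpc_cmd going to the target's default socket and bperf_rpc/bperf_py to /var/tmp/bperf.sock (paths shortened):

  # initiator: keep NVMe error statistics, -1 = unlimited retries in the bdev layer
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target: clear any previously configured injection
  rpc.py accel_error_inject_error -o crc32c -t disable
  # initiator: attach to the subsystem with data digest enabled
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # target: corrupt the next 256 crc32c operations
  rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  # initiator: run the 2-second randread workload
  bdevperf.py -s /var/tmp/bperf.sock perform_tests

With the injection active, each read arrives with a digest that no longer matches its payload, so the host logs "data digest error" and completes the command with the COMMAND TRANSIENT TRANSPORT ERROR status; that is what the long run of repeated READ/completion messages below is recording.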
00:22:30.552 [2024-11-04 14:48:39.530183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.552 [2024-11-04 14:48:39.530224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.552 [2024-11-04 14:48:39.530233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.552 [2024-11-04 14:48:39.543126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.552 [2024-11-04 14:48:39.543152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.552 [2024-11-04 14:48:39.543159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.552 [2024-11-04 14:48:39.556057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.552 [2024-11-04 14:48:39.556083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.552 [2024-11-04 14:48:39.556089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.552 [2024-11-04 14:48:39.568703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.552 [2024-11-04 14:48:39.568822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.553 [2024-11-04 14:48:39.568832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.553 [2024-11-04 14:48:39.581685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.553 [2024-11-04 14:48:39.581708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.553 [2024-11-04 14:48:39.581714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.553 [2024-11-04 14:48:39.594631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.553 [2024-11-04 14:48:39.594655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.553 [2024-11-04 14:48:39.594661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.553 [2024-11-04 14:48:39.607657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.553 [2024-11-04 14:48:39.607752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.553 [2024-11-04 14:48:39.607760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.553 [2024-11-04 14:48:39.620772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.553 [2024-11-04 14:48:39.620797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.553 [2024-11-04 14:48:39.620803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.553 [2024-11-04 14:48:39.633774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.553 [2024-11-04 14:48:39.633798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.553 [2024-11-04 14:48:39.633803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.553 [2024-11-04 14:48:39.646789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.553 [2024-11-04 14:48:39.646880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.553 [2024-11-04 14:48:39.646888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.553 [2024-11-04 14:48:39.659887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.553 [2024-11-04 14:48:39.659911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.553 [2024-11-04 14:48:39.659916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.553 [2024-11-04 14:48:39.672861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.553 [2024-11-04 14:48:39.672884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.553 [2024-11-04 14:48:39.672890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.553 [2024-11-04 14:48:39.685872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.553 [2024-11-04 14:48:39.685964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.553 [2024-11-04 14:48:39.685973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.810 [2024-11-04 14:48:39.698958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.810 [2024-11-04 14:48:39.698982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.810 [2024-11-04 14:48:39.698988] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.810 [2024-11-04 14:48:39.711961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.810 [2024-11-04 14:48:39.711985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.810 [2024-11-04 14:48:39.711991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.810 [2024-11-04 14:48:39.724922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.811 [2024-11-04 14:48:39.725011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.811 [2024-11-04 14:48:39.725019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.811 [2024-11-04 14:48:39.738009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.811 [2024-11-04 14:48:39.738032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.811 [2024-11-04 14:48:39.738038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.811 [2024-11-04 14:48:39.751026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.811 [2024-11-04 14:48:39.751050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.811 [2024-11-04 14:48:39.751056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.811 [2024-11-04 14:48:39.764031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.811 [2024-11-04 14:48:39.764122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.811 [2024-11-04 14:48:39.764130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.811 [2024-11-04 14:48:39.777053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.811 [2024-11-04 14:48:39.777077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.811 [2024-11-04 14:48:39.777083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.811 [2024-11-04 14:48:39.789980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.811 [2024-11-04 14:48:39.790004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.811 [2024-11-04 14:48:39.790010] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.811 [2024-11-04 14:48:39.802796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.811 [2024-11-04 14:48:39.802883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.811 [2024-11-04 14:48:39.802890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.811 [2024-11-04 14:48:39.815805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.811 [2024-11-04 14:48:39.815828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.811 [2024-11-04 14:48:39.815834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.811 [2024-11-04 14:48:39.828498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.811 [2024-11-04 14:48:39.828523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.811 [2024-11-04 14:48:39.828529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.811 [2024-11-04 14:48:39.841356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.811 [2024-11-04 14:48:39.841380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.811 [2024-11-04 14:48:39.841386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.811 [2024-11-04 14:48:39.854369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.811 [2024-11-04 14:48:39.854393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.811 [2024-11-04 14:48:39.854399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.811 [2024-11-04 14:48:39.867331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.811 [2024-11-04 14:48:39.867355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.811 [2024-11-04 14:48:39.867361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.811 [2024-11-04 14:48:39.880309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.811 [2024-11-04 14:48:39.880333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:30.811 [2024-11-04 14:48:39.880338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.811 [2024-11-04 14:48:39.893360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.811 [2024-11-04 14:48:39.893383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.811 [2024-11-04 14:48:39.893389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.811 [2024-11-04 14:48:39.906378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.811 [2024-11-04 14:48:39.906401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.811 [2024-11-04 14:48:39.906407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.811 [2024-11-04 14:48:39.919389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.811 [2024-11-04 14:48:39.919414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.811 [2024-11-04 14:48:39.919419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.811 [2024-11-04 14:48:39.932382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.811 [2024-11-04 14:48:39.932406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.811 [2024-11-04 14:48:39.932412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.811 [2024-11-04 14:48:39.945141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:30.811 [2024-11-04 14:48:39.945164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.811 [2024-11-04 14:48:39.945170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.070 [2024-11-04 14:48:39.958129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.070 [2024-11-04 14:48:39.958152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.070 [2024-11-04 14:48:39.958158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.070 [2024-11-04 14:48:39.971127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.070 [2024-11-04 14:48:39.971223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 
lba:8039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.070 [2024-11-04 14:48:39.971230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.070 [2024-11-04 14:48:39.984221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.070 [2024-11-04 14:48:39.984245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.070 [2024-11-04 14:48:39.984251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.070 [2024-11-04 14:48:39.997224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.070 [2024-11-04 14:48:39.997247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.070 [2024-11-04 14:48:39.997253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.070 [2024-11-04 14:48:40.010452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.070 [2024-11-04 14:48:40.010547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.070 [2024-11-04 14:48:40.010555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.070 [2024-11-04 14:48:40.023845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.070 [2024-11-04 14:48:40.023870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.070 [2024-11-04 14:48:40.023877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.070 [2024-11-04 14:48:40.037149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.070 [2024-11-04 14:48:40.037256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.070 [2024-11-04 14:48:40.037264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.070 [2024-11-04 14:48:40.050360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.070 [2024-11-04 14:48:40.050385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.070 [2024-11-04 14:48:40.050391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.070 [2024-11-04 14:48:40.063649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.070 [2024-11-04 14:48:40.063741] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.070 [2024-11-04 14:48:40.063749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.070 [2024-11-04 14:48:40.076749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.070 [2024-11-04 14:48:40.076773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.070 [2024-11-04 14:48:40.076779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.070 [2024-11-04 14:48:40.090172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.070 [2024-11-04 14:48:40.090196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.070 [2024-11-04 14:48:40.090203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.070 [2024-11-04 14:48:40.103192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.070 [2024-11-04 14:48:40.103282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.070 [2024-11-04 14:48:40.103290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.070 [2024-11-04 14:48:40.116282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.070 [2024-11-04 14:48:40.116306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.070 [2024-11-04 14:48:40.116312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.070 [2024-11-04 14:48:40.129232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.070 [2024-11-04 14:48:40.129255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.070 [2024-11-04 14:48:40.129261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.070 [2024-11-04 14:48:40.142158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.070 [2024-11-04 14:48:40.142245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.070 [2024-11-04 14:48:40.142253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.070 [2024-11-04 14:48:40.155185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 
00:22:31.070 [2024-11-04 14:48:40.155209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.070 [2024-11-04 14:48:40.155214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.070 [2024-11-04 14:48:40.167832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.070 [2024-11-04 14:48:40.167855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.070 [2024-11-04 14:48:40.167860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.070 [2024-11-04 14:48:40.180678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.070 [2024-11-04 14:48:40.180767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.071 [2024-11-04 14:48:40.180775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.071 [2024-11-04 14:48:40.193764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.071 [2024-11-04 14:48:40.193787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.071 [2024-11-04 14:48:40.193792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.071 [2024-11-04 14:48:40.206747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.071 [2024-11-04 14:48:40.206770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.071 [2024-11-04 14:48:40.206777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.329 [2024-11-04 14:48:40.219743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.329 [2024-11-04 14:48:40.219843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.329 [2024-11-04 14:48:40.219851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.329 [2024-11-04 14:48:40.232817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.329 [2024-11-04 14:48:40.232840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.329 [2024-11-04 14:48:40.232846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.329 [2024-11-04 14:48:40.245722] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.329 [2024-11-04 14:48:40.245745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.329 [2024-11-04 14:48:40.245751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.329 [2024-11-04 14:48:40.258370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.329 [2024-11-04 14:48:40.258394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.329 [2024-11-04 14:48:40.258400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.329 [2024-11-04 14:48:40.271401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.329 [2024-11-04 14:48:40.271424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.329 [2024-11-04 14:48:40.271430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.329 [2024-11-04 14:48:40.284407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.329 [2024-11-04 14:48:40.284431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.329 [2024-11-04 14:48:40.284437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.329 [2024-11-04 14:48:40.297410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.329 [2024-11-04 14:48:40.297433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.329 [2024-11-04 14:48:40.297439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.329 [2024-11-04 14:48:40.310470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.329 [2024-11-04 14:48:40.310497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.329 [2024-11-04 14:48:40.310503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.329 [2024-11-04 14:48:40.323495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.329 [2024-11-04 14:48:40.323520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.329 [2024-11-04 14:48:40.323525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:22:31.329 [2024-11-04 14:48:40.336506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.329 [2024-11-04 14:48:40.336529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.329 [2024-11-04 14:48:40.336535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.329 [2024-11-04 14:48:40.355178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.329 [2024-11-04 14:48:40.355204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.329 [2024-11-04 14:48:40.355210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.329 [2024-11-04 14:48:40.368207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.329 [2024-11-04 14:48:40.368323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.329 [2024-11-04 14:48:40.368331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.329 [2024-11-04 14:48:40.381305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.329 [2024-11-04 14:48:40.381331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.329 [2024-11-04 14:48:40.381336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.329 [2024-11-04 14:48:40.394323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.329 [2024-11-04 14:48:40.394347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.329 [2024-11-04 14:48:40.394353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.329 [2024-11-04 14:48:40.407301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.329 [2024-11-04 14:48:40.407390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.329 [2024-11-04 14:48:40.407398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.329 [2024-11-04 14:48:40.420383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.329 [2024-11-04 14:48:40.420478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.330 [2024-11-04 14:48:40.420525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.330 [2024-11-04 14:48:40.433514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.330 [2024-11-04 14:48:40.433625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.330 [2024-11-04 14:48:40.433671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.330 [2024-11-04 14:48:40.446586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.330 [2024-11-04 14:48:40.446701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.330 [2024-11-04 14:48:40.446743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.330 [2024-11-04 14:48:40.459735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.330 [2024-11-04 14:48:40.459828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.330 [2024-11-04 14:48:40.459868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.588 [2024-11-04 14:48:40.472888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.588 [2024-11-04 14:48:40.472979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.588 [2024-11-04 14:48:40.473025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.588 [2024-11-04 14:48:40.486097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.588 [2024-11-04 14:48:40.486190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.588 [2024-11-04 14:48:40.486234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.588 [2024-11-04 14:48:40.499330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.588 [2024-11-04 14:48:40.499426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.588 [2024-11-04 14:48:40.499468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.588 19356.00 IOPS, 75.61 MiB/s [2024-11-04T14:48:40.728Z] [2024-11-04 14:48:40.512492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.588 [2024-11-04 14:48:40.512586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1256 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:31.588 [2024-11-04 14:48:40.512647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.588 [2024-11-04 14:48:40.525713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.588 [2024-11-04 14:48:40.525803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.588 [2024-11-04 14:48:40.525847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.588 [2024-11-04 14:48:40.538959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.588 [2024-11-04 14:48:40.539053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.588 [2024-11-04 14:48:40.539100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.588 [2024-11-04 14:48:40.552435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.588 [2024-11-04 14:48:40.552525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.588 [2024-11-04 14:48:40.552571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.588 [2024-11-04 14:48:40.565682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.588 [2024-11-04 14:48:40.565774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.588 [2024-11-04 14:48:40.565820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.588 [2024-11-04 14:48:40.578815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.588 [2024-11-04 14:48:40.578904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.588 [2024-11-04 14:48:40.578947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.588 [2024-11-04 14:48:40.591946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.588 [2024-11-04 14:48:40.592039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.588 [2024-11-04 14:48:40.592085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.588 [2024-11-04 14:48:40.605107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.588 [2024-11-04 14:48:40.605203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:88 nsid:1 lba:5176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.588 [2024-11-04 14:48:40.605245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.588 [2024-11-04 14:48:40.618387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.588 [2024-11-04 14:48:40.618496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.588 [2024-11-04 14:48:40.618589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.588 [2024-11-04 14:48:40.631596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.588 [2024-11-04 14:48:40.631695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.588 [2024-11-04 14:48:40.631737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.588 [2024-11-04 14:48:40.644595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.588 [2024-11-04 14:48:40.644698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.588 [2024-11-04 14:48:40.644740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.588 [2024-11-04 14:48:40.657710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.588 [2024-11-04 14:48:40.657800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.588 [2024-11-04 14:48:40.657839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.588 [2024-11-04 14:48:40.670846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.588 [2024-11-04 14:48:40.670941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.588 [2024-11-04 14:48:40.670983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.588 [2024-11-04 14:48:40.683963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.588 [2024-11-04 14:48:40.684052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.588 [2024-11-04 14:48:40.684093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.588 [2024-11-04 14:48:40.697082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.588 [2024-11-04 14:48:40.697172] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.588 [2024-11-04 14:48:40.697214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.588 [2024-11-04 14:48:40.710363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.588 [2024-11-04 14:48:40.710455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.588 [2024-11-04 14:48:40.710500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.588 [2024-11-04 14:48:40.723576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.588 [2024-11-04 14:48:40.723669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.588 [2024-11-04 14:48:40.723677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.846 [2024-11-04 14:48:40.736828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.846 [2024-11-04 14:48:40.736910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.846 [2024-11-04 14:48:40.736918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.846 [2024-11-04 14:48:40.749905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.846 [2024-11-04 14:48:40.749928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.846 [2024-11-04 14:48:40.749933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.846 [2024-11-04 14:48:40.762936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.846 [2024-11-04 14:48:40.762966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.846 [2024-11-04 14:48:40.762972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.846 [2024-11-04 14:48:40.775809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.846 [2024-11-04 14:48:40.775903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.846 [2024-11-04 14:48:40.775911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.846 [2024-11-04 14:48:40.788837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14dc370) 00:22:31.846 [2024-11-04 14:48:40.788860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.846 [2024-11-04 14:48:40.788865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.846 [2024-11-04 14:48:40.801819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.846 [2024-11-04 14:48:40.801847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.846 [2024-11-04 14:48:40.801853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.846 [2024-11-04 14:48:40.814729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.846 [2024-11-04 14:48:40.814826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.846 [2024-11-04 14:48:40.814833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.846 [2024-11-04 14:48:40.827809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.846 [2024-11-04 14:48:40.827832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.846 [2024-11-04 14:48:40.827838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.846 [2024-11-04 14:48:40.840802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.847 [2024-11-04 14:48:40.840824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.847 [2024-11-04 14:48:40.840831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.847 [2024-11-04 14:48:40.853805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.847 [2024-11-04 14:48:40.853889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.847 [2024-11-04 14:48:40.853897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.847 [2024-11-04 14:48:40.866812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.847 [2024-11-04 14:48:40.866836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.847 [2024-11-04 14:48:40.866843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.847 [2024-11-04 14:48:40.879789] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.847 [2024-11-04 14:48:40.879812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.847 [2024-11-04 14:48:40.879818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.847 [2024-11-04 14:48:40.892789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.847 [2024-11-04 14:48:40.892877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.847 [2024-11-04 14:48:40.892885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.847 [2024-11-04 14:48:40.905831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.847 [2024-11-04 14:48:40.905853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.847 [2024-11-04 14:48:40.905858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.847 [2024-11-04 14:48:40.918437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.847 [2024-11-04 14:48:40.918461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.847 [2024-11-04 14:48:40.918467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.847 [2024-11-04 14:48:40.931307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.847 [2024-11-04 14:48:40.931330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.847 [2024-11-04 14:48:40.931336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.847 [2024-11-04 14:48:40.943936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.847 [2024-11-04 14:48:40.944024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.847 [2024-11-04 14:48:40.944031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.847 [2024-11-04 14:48:40.956927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.847 [2024-11-04 14:48:40.956950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.847 [2024-11-04 14:48:40.956956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:22:31.847 [2024-11-04 14:48:40.969925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.847 [2024-11-04 14:48:40.969948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.847 [2024-11-04 14:48:40.969954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.847 [2024-11-04 14:48:40.982894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:31.847 [2024-11-04 14:48:40.982982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.847 [2024-11-04 14:48:40.982990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.105 [2024-11-04 14:48:40.995960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.105 [2024-11-04 14:48:40.995983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.105 [2024-11-04 14:48:40.995989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.105 [2024-11-04 14:48:41.008962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.105 [2024-11-04 14:48:41.008985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.105 [2024-11-04 14:48:41.008991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.105 [2024-11-04 14:48:41.021767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.105 [2024-11-04 14:48:41.021857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.105 [2024-11-04 14:48:41.021864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.105 [2024-11-04 14:48:41.034763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.105 [2024-11-04 14:48:41.034787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.105 [2024-11-04 14:48:41.034793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.105 [2024-11-04 14:48:41.047677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.105 [2024-11-04 14:48:41.047701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.105 [2024-11-04 14:48:41.047707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.105 [2024-11-04 14:48:41.060700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.105 [2024-11-04 14:48:41.060798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.105 [2024-11-04 14:48:41.060806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.105 [2024-11-04 14:48:41.073805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.105 [2024-11-04 14:48:41.073828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.105 [2024-11-04 14:48:41.073834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.105 [2024-11-04 14:48:41.086797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.105 [2024-11-04 14:48:41.086821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.105 [2024-11-04 14:48:41.086827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.105 [2024-11-04 14:48:41.099854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.105 [2024-11-04 14:48:41.099945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.105 [2024-11-04 14:48:41.099953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.105 [2024-11-04 14:48:41.112933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.105 [2024-11-04 14:48:41.112957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.105 [2024-11-04 14:48:41.112962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.105 [2024-11-04 14:48:41.125932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.105 [2024-11-04 14:48:41.125955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.105 [2024-11-04 14:48:41.125961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.105 [2024-11-04 14:48:41.138930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.105 [2024-11-04 14:48:41.139025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.105 [2024-11-04 14:48:41.139033] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.105 [2024-11-04 14:48:41.152012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.105 [2024-11-04 14:48:41.152035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.105 [2024-11-04 14:48:41.152042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.105 [2024-11-04 14:48:41.165030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.105 [2024-11-04 14:48:41.165056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.105 [2024-11-04 14:48:41.165062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.105 [2024-11-04 14:48:41.178032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.105 [2024-11-04 14:48:41.178128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.105 [2024-11-04 14:48:41.178136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.105 [2024-11-04 14:48:41.197053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.105 [2024-11-04 14:48:41.197145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.105 [2024-11-04 14:48:41.197153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.105 [2024-11-04 14:48:41.210152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.105 [2024-11-04 14:48:41.210176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.105 [2024-11-04 14:48:41.210182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.105 [2024-11-04 14:48:41.223165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.105 [2024-11-04 14:48:41.223189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.105 [2024-11-04 14:48:41.223195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.105 [2024-11-04 14:48:41.236164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.105 [2024-11-04 14:48:41.236253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.105 [2024-11-04 
14:48:41.236260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.363 [2024-11-04 14:48:41.249254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.363 [2024-11-04 14:48:41.249278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.363 [2024-11-04 14:48:41.249284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.363 [2024-11-04 14:48:41.262257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.363 [2024-11-04 14:48:41.262281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.363 [2024-11-04 14:48:41.262287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.363 [2024-11-04 14:48:41.275094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.363 [2024-11-04 14:48:41.275182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.363 [2024-11-04 14:48:41.275190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.363 [2024-11-04 14:48:41.288167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.363 [2024-11-04 14:48:41.288192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.363 [2024-11-04 14:48:41.288198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.363 [2024-11-04 14:48:41.301120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.363 [2024-11-04 14:48:41.301143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.363 [2024-11-04 14:48:41.301149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.363 [2024-11-04 14:48:41.314119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.363 [2024-11-04 14:48:41.314203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.363 [2024-11-04 14:48:41.314210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.363 [2024-11-04 14:48:41.327204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.363 [2024-11-04 14:48:41.327228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6168 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:32.363 [2024-11-04 14:48:41.327234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.363 [2024-11-04 14:48:41.340208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.363 [2024-11-04 14:48:41.340232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.363 [2024-11-04 14:48:41.340238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.363 [2024-11-04 14:48:41.353217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.363 [2024-11-04 14:48:41.353313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.363 [2024-11-04 14:48:41.353321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.363 [2024-11-04 14:48:41.366312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.363 [2024-11-04 14:48:41.366404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.363 [2024-11-04 14:48:41.366445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.363 [2024-11-04 14:48:41.379309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.363 [2024-11-04 14:48:41.379403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.363 [2024-11-04 14:48:41.379488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.363 [2024-11-04 14:48:41.392401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.363 [2024-11-04 14:48:41.392497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.363 [2024-11-04 14:48:41.392538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.363 [2024-11-04 14:48:41.405157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.363 [2024-11-04 14:48:41.405248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.363 [2024-11-04 14:48:41.405293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.363 [2024-11-04 14:48:41.417934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.363 [2024-11-04 14:48:41.418023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:35 nsid:1 lba:3509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.363 [2024-11-04 14:48:41.418069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.363 [2024-11-04 14:48:41.430895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.363 [2024-11-04 14:48:41.430985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.363 [2024-11-04 14:48:41.431027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.363 [2024-11-04 14:48:41.443975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.363 [2024-11-04 14:48:41.444065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.363 [2024-11-04 14:48:41.444107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.363 [2024-11-04 14:48:41.457079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.363 [2024-11-04 14:48:41.457174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.363 [2024-11-04 14:48:41.457216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.363 [2024-11-04 14:48:41.470143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.363 [2024-11-04 14:48:41.470232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.363 [2024-11-04 14:48:41.470274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.363 [2024-11-04 14:48:41.483166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.363 [2024-11-04 14:48:41.483257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.363 [2024-11-04 14:48:41.483299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.363 [2024-11-04 14:48:41.496243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.363 [2024-11-04 14:48:41.496342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.363 [2024-11-04 14:48:41.496384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.621 [2024-11-04 14:48:41.510694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14dc370) 00:22:32.621 [2024-11-04 14:48:41.510787] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.621 [2024-11-04 14:48:41.510830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.621 19355.50 IOPS, 75.61 MiB/s 00:22:32.621 Latency(us) 00:22:32.621 [2024-11-04T14:48:41.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.621 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:32.621 nvme0n1 : 2.01 19393.43 75.76 0.00 0.00 6595.49 6150.30 25206.15 00:22:32.621 [2024-11-04T14:48:41.761Z] =================================================================================================================== 00:22:32.621 [2024-11-04T14:48:41.761Z] Total : 19393.43 75.76 0.00 0.00 6595.49 6150.30 25206.15 00:22:32.621 { 00:22:32.621 "results": [ 00:22:32.621 { 00:22:32.621 "job": "nvme0n1", 00:22:32.621 "core_mask": "0x2", 00:22:32.621 "workload": "randread", 00:22:32.621 "status": "finished", 00:22:32.621 "queue_depth": 128, 00:22:32.621 "io_size": 4096, 00:22:32.621 "runtime": 2.009186, 00:22:32.621 "iops": 19393.425994407684, 00:22:32.621 "mibps": 75.75557029065502, 00:22:32.621 "io_failed": 0, 00:22:32.621 "io_timeout": 0, 00:22:32.621 "avg_latency_us": 6595.494354894432, 00:22:32.621 "min_latency_us": 6150.301538461538, 00:22:32.621 "max_latency_us": 25206.153846153848 00:22:32.621 } 00:22:32.621 ], 00:22:32.621 "core_count": 1 00:22:32.621 } 00:22:32.621 14:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:32.621 14:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:32.621 14:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:32.621 | .driver_specific 00:22:32.621 | .nvme_error 00:22:32.621 | .status_code 00:22:32.621 | .command_transient_transport_error' 00:22:32.621 14:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:32.621 14:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 152 > 0 )) 00:22:32.621 14:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 78748 00:22:32.621 14:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 78748 ']' 00:22:32.621 14:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 78748 00:22:32.878 14:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:22:32.878 14:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:32.878 14:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78748 00:22:32.878 killing process with pid 78748 00:22:32.878 Received shutdown signal, test time was about 2.000000 seconds 00:22:32.878 00:22:32.878 Latency(us) 00:22:32.878 [2024-11-04T14:48:42.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.878 [2024-11-04T14:48:42.018Z] =================================================================================================================== 
00:22:32.878 [2024-11-04T14:48:42.018Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:32.878 14:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:32.878 14:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:32.878 14:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78748' 00:22:32.878 14:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 78748 00:22:32.878 14:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 78748 00:22:32.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:32.878 14:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:22:32.878 14:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:22:32.878 14:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:22:32.878 14:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:22:32.878 14:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:22:32.878 14:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:22:32.878 14:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=78802 00:22:32.878 14:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 78802 /var/tmp/bperf.sock 00:22:32.878 14:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 78802 ']' 00:22:32.878 14:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:32.878 14:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:32.878 14:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:32.878 14:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:32.878 14:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:32.878 [2024-11-04 14:48:41.911842] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:22:32.878 [2024-11-04 14:48:41.912034] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78802 ] 00:22:32.878 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:32.878 Zero copy mechanism will not be used. 
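The pass/fail criterion for the qd=128 run above is the get_transient_errcount step traced a few lines back: bdev_get_iostat is queried over the bperf RPC socket and the transient transport error counter is pulled out with jq, then compared against zero. A minimal stand-alone re-creation of that check, with the rpc.py path, socket, and jq filter copied from the trace (only the variable name errcount is added here for illustration):

    # Re-query the initiator-side error counters gathered via --nvme-error-stat
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The digest-error test only passes if corrupted digests were actually observed
    (( errcount > 0 ))

In the run above this counter came back as 152, so the check succeeded; the old bdevperf instance is then killed and a new one is launched with -o 131072 -q 16 -z on the same /var/tmp/bperf.sock for the next pass.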
00:22:33.135 [2024-11-04 14:48:42.046558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.135 [2024-11-04 14:48:42.077335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.135 [2024-11-04 14:48:42.105401] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:33.135 14:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:33.135 14:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:22:33.135 14:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:33.135 14:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:33.392 14:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:33.392 14:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.392 14:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:33.392 14:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.392 14:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:33.392 14:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:33.669 nvme0n1 00:22:33.669 14:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:33.669 14:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.669 14:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:33.669 14:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.669 14:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:33.669 14:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:33.669 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:33.669 Zero copy mechanism will not be used. 00:22:33.669 Running I/O for 2 seconds... 
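Before the 128 KiB / qd 16 I/O starts, the trace above arms the crc32c fault and attaches the controller with data digest enabled. A condensed sketch of that RPC sequence, with every subcommand and flag taken verbatim from the trace; the two accel_error_inject_error calls go through rpc_cmd rather than the bperf socket, so the assumption here is that they land on the harness's default SPDK RPC socket:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPERF_SOCK=/var/tmp/bperf.sock

    # initiator side: keep per-status-code NVMe error stats and retry failed I/O indefinitely
    $RPC -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # clear any previous crc32c fault, then attach with data digest (--ddgst) enabled
    $RPC accel_error_inject_error -o crc32c -t disable          # assumed: default SPDK RPC socket
    $RPC -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # corrupt every 32nd crc32c result so data digests mismatch during the run
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32    # assumed: default SPDK RPC socket
    # kick off the preconfigured bdevperf job (randread, 131072 B, qd 16) over the same socket
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests

The flood of "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" lines that follows is the expected result of this injection: each corrupted digest is detected by the initiator, logged, and retried, and the per-status-code counters checked afterwards confirm the errors were seen.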
00:22:33.669 [2024-11-04 14:48:42.737728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.669 [2024-11-04 14:48:42.737762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.669 [2024-11-04 14:48:42.737771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.669 [2024-11-04 14:48:42.740534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.669 [2024-11-04 14:48:42.740559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.669 [2024-11-04 14:48:42.740565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.669 [2024-11-04 14:48:42.743418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.669 [2024-11-04 14:48:42.743441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.669 [2024-11-04 14:48:42.743447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.669 [2024-11-04 14:48:42.746216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.669 [2024-11-04 14:48:42.746239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.669 [2024-11-04 14:48:42.746245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.669 [2024-11-04 14:48:42.749050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.669 [2024-11-04 14:48:42.749168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.669 [2024-11-04 14:48:42.749176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.669 [2024-11-04 14:48:42.752007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.669 [2024-11-04 14:48:42.752030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.669 [2024-11-04 14:48:42.752036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.669 [2024-11-04 14:48:42.754933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.669 [2024-11-04 14:48:42.755041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.669 [2024-11-04 14:48:42.755110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.669 [2024-11-04 14:48:42.757961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.669 [2024-11-04 14:48:42.758056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.669 [2024-11-04 14:48:42.758099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.669 [2024-11-04 14:48:42.760940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.669 [2024-11-04 14:48:42.761034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.669 [2024-11-04 14:48:42.761080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.669 [2024-11-04 14:48:42.763932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.669 [2024-11-04 14:48:42.764023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.669 [2024-11-04 14:48:42.764057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.669 [2024-11-04 14:48:42.766838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.669 [2024-11-04 14:48:42.766861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.669 [2024-11-04 14:48:42.766868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.669 [2024-11-04 14:48:42.769728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.669 [2024-11-04 14:48:42.769816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.669 [2024-11-04 14:48:42.769880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.669 [2024-11-04 14:48:42.772798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.669 [2024-11-04 14:48:42.772891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.669 [2024-11-04 14:48:42.772919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.669 [2024-11-04 14:48:42.775801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.669 [2024-11-04 14:48:42.775896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.669 [2024-11-04 14:48:42.775949] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.669 [2024-11-04 14:48:42.778834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.669 [2024-11-04 14:48:42.778926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.669 [2024-11-04 14:48:42.778971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.669 [2024-11-04 14:48:42.781824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.669 [2024-11-04 14:48:42.781917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.669 [2024-11-04 14:48:42.781963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.669 [2024-11-04 14:48:42.784793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.669 [2024-11-04 14:48:42.784884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.670 [2024-11-04 14:48:42.784892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.670 [2024-11-04 14:48:42.787705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.670 [2024-11-04 14:48:42.787728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.670 [2024-11-04 14:48:42.787734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.670 [2024-11-04 14:48:42.790520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.670 [2024-11-04 14:48:42.790547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.670 [2024-11-04 14:48:42.790553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.670 [2024-11-04 14:48:42.793339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.670 [2024-11-04 14:48:42.793365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.670 [2024-11-04 14:48:42.793371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.670 [2024-11-04 14:48:42.796165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.670 [2024-11-04 14:48:42.796190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:33.670 [2024-11-04 14:48:42.796196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.670 [2024-11-04 14:48:42.799016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.670 [2024-11-04 14:48:42.799114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.670 [2024-11-04 14:48:42.799122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.670 [2024-11-04 14:48:42.801966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.670 [2024-11-04 14:48:42.801993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.670 [2024-11-04 14:48:42.801998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.670 [2024-11-04 14:48:42.804831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.670 [2024-11-04 14:48:42.804856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.670 [2024-11-04 14:48:42.804862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.670 [2024-11-04 14:48:42.807703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.670 [2024-11-04 14:48:42.807727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.670 [2024-11-04 14:48:42.807733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.929 [2024-11-04 14:48:42.810541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.929 [2024-11-04 14:48:42.810650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.929 [2024-11-04 14:48:42.810658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.929 [2024-11-04 14:48:42.813443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.929 [2024-11-04 14:48:42.813470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.929 [2024-11-04 14:48:42.813476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.929 [2024-11-04 14:48:42.816299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.929 [2024-11-04 14:48:42.816325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.929 [2024-11-04 14:48:42.816331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.929 [2024-11-04 14:48:42.819119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.929 [2024-11-04 14:48:42.819145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.929 [2024-11-04 14:48:42.819150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.929 [2024-11-04 14:48:42.821920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.929 [2024-11-04 14:48:42.822010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.929 [2024-11-04 14:48:42.822017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.929 [2024-11-04 14:48:42.824853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.929 [2024-11-04 14:48:42.824878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.929 [2024-11-04 14:48:42.824883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.929 [2024-11-04 14:48:42.827666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.929 [2024-11-04 14:48:42.827689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.929 [2024-11-04 14:48:42.827695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.929 [2024-11-04 14:48:42.830500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.929 [2024-11-04 14:48:42.830526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.929 [2024-11-04 14:48:42.830531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.930 [2024-11-04 14:48:42.833335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.930 [2024-11-04 14:48:42.833425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.930 [2024-11-04 14:48:42.833433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.930 [2024-11-04 14:48:42.836211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.930 [2024-11-04 14:48:42.836236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.930 [2024-11-04 14:48:42.836242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.930 [2024-11-04 14:48:42.839073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.930 [2024-11-04 14:48:42.839099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.930 [2024-11-04 14:48:42.839105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.930 [2024-11-04 14:48:42.841894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.930 [2024-11-04 14:48:42.841919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.930 [2024-11-04 14:48:42.841925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.930 [2024-11-04 14:48:42.844679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.930 [2024-11-04 14:48:42.844702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.930 [2024-11-04 14:48:42.844707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.930 [2024-11-04 14:48:42.847476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.930 [2024-11-04 14:48:42.847503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.930 [2024-11-04 14:48:42.847509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.930 [2024-11-04 14:48:42.850320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.930 [2024-11-04 14:48:42.850347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.930 [2024-11-04 14:48:42.850352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.930 [2024-11-04 14:48:42.853123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.930 [2024-11-04 14:48:42.853148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.930 [2024-11-04 14:48:42.853154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.930 [2024-11-04 14:48:42.855954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 
00:22:33.930 [2024-11-04 14:48:42.856046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.930 [2024-11-04 14:48:42.856054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.930 [2024-11-04 14:48:42.858846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.930 [2024-11-04 14:48:42.858871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.930 [2024-11-04 14:48:42.858876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.930 [2024-11-04 14:48:42.861689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.930 [2024-11-04 14:48:42.861711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.930 [2024-11-04 14:48:42.861717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.930 [2024-11-04 14:48:42.864466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.930 [2024-11-04 14:48:42.864491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.930 [2024-11-04 14:48:42.864496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.930 [2024-11-04 14:48:42.867229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.930 [2024-11-04 14:48:42.867319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.930 [2024-11-04 14:48:42.867327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.930 [2024-11-04 14:48:42.870077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.930 [2024-11-04 14:48:42.870099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.930 [2024-11-04 14:48:42.870104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.930 [2024-11-04 14:48:42.872804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.930 [2024-11-04 14:48:42.872828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.930 [2024-11-04 14:48:42.872834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.930 [2024-11-04 14:48:42.875592] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.930 [2024-11-04 14:48:42.875631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.930 [2024-11-04 14:48:42.875637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.930 [2024-11-04 14:48:42.878447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.930 [2024-11-04 14:48:42.878539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.930 [2024-11-04 14:48:42.878546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.930 [2024-11-04 14:48:42.881361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.930 [2024-11-04 14:48:42.881388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.930 [2024-11-04 14:48:42.881393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.930 [2024-11-04 14:48:42.884221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.930 [2024-11-04 14:48:42.884247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.930 [2024-11-04 14:48:42.884252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.930 [2024-11-04 14:48:42.887094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.930 [2024-11-04 14:48:42.887122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.930 [2024-11-04 14:48:42.887128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.930 [2024-11-04 14:48:42.889942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.930 [2024-11-04 14:48:42.890038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.930 [2024-11-04 14:48:42.890045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.930 [2024-11-04 14:48:42.892832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.930 [2024-11-04 14:48:42.892859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.930 [2024-11-04 14:48:42.892864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:22:33.930 [2024-11-04 14:48:42.895678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.930 [2024-11-04 14:48:42.895701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.930 [2024-11-04 14:48:42.895707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.930 [2024-11-04 14:48:42.898489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.931 [2024-11-04 14:48:42.898514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.931 [2024-11-04 14:48:42.898520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.931 [2024-11-04 14:48:42.901278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.931 [2024-11-04 14:48:42.901369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.931 [2024-11-04 14:48:42.901377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.931 [2024-11-04 14:48:42.904134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.931 [2024-11-04 14:48:42.904160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.931 [2024-11-04 14:48:42.904165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.931 [2024-11-04 14:48:42.906930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.931 [2024-11-04 14:48:42.906956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.931 [2024-11-04 14:48:42.906961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.931 [2024-11-04 14:48:42.909737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.931 [2024-11-04 14:48:42.909760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.931 [2024-11-04 14:48:42.909765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.931 [2024-11-04 14:48:42.912485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.931 [2024-11-04 14:48:42.912575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.931 [2024-11-04 14:48:42.912582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.931 [2024-11-04 14:48:42.915262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.931 [2024-11-04 14:48:42.915287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.931 [2024-11-04 14:48:42.915292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.931 [2024-11-04 14:48:42.918025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.931 [2024-11-04 14:48:42.918051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.931 [2024-11-04 14:48:42.918056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.931 [2024-11-04 14:48:42.920855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.931 [2024-11-04 14:48:42.920880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.931 [2024-11-04 14:48:42.920886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.931 [2024-11-04 14:48:42.923701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.931 [2024-11-04 14:48:42.923723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.931 [2024-11-04 14:48:42.923728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.931 [2024-11-04 14:48:42.926518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.931 [2024-11-04 14:48:42.926544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.931 [2024-11-04 14:48:42.926550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.931 [2024-11-04 14:48:42.929331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.931 [2024-11-04 14:48:42.929355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.931 [2024-11-04 14:48:42.929361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.931 [2024-11-04 14:48:42.932093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.931 [2024-11-04 14:48:42.932116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.931 [2024-11-04 14:48:42.932122] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.931 [2024-11-04 14:48:42.934848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.931 [2024-11-04 14:48:42.934942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.931 [2024-11-04 14:48:42.934950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.931 [2024-11-04 14:48:42.937680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.931 [2024-11-04 14:48:42.937702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.931 [2024-11-04 14:48:42.937707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.931 [2024-11-04 14:48:42.940446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.931 [2024-11-04 14:48:42.940472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.931 [2024-11-04 14:48:42.940477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.931 [2024-11-04 14:48:42.943199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.931 [2024-11-04 14:48:42.943224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.931 [2024-11-04 14:48:42.943230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.931 [2024-11-04 14:48:42.945960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.931 [2024-11-04 14:48:42.946049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.931 [2024-11-04 14:48:42.946056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.931 [2024-11-04 14:48:42.948789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.931 [2024-11-04 14:48:42.948813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.931 [2024-11-04 14:48:42.948818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.931 [2024-11-04 14:48:42.951562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.931 [2024-11-04 14:48:42.951588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:33.931 [2024-11-04 14:48:42.951593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.931 [2024-11-04 14:48:42.954392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.931 [2024-11-04 14:48:42.954418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.931 [2024-11-04 14:48:42.954424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.931 [2024-11-04 14:48:42.957221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.931 [2024-11-04 14:48:42.957310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.931 [2024-11-04 14:48:42.957317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.931 [2024-11-04 14:48:42.960123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.931 [2024-11-04 14:48:42.960150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.931 [2024-11-04 14:48:42.960156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.931 [2024-11-04 14:48:42.962962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.931 [2024-11-04 14:48:42.962988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.931 [2024-11-04 14:48:42.962994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.931 [2024-11-04 14:48:42.965802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.931 [2024-11-04 14:48:42.965826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.931 [2024-11-04 14:48:42.965831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.931 [2024-11-04 14:48:42.968582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.931 [2024-11-04 14:48:42.968687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.931 [2024-11-04 14:48:42.968695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.931 [2024-11-04 14:48:42.971500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.932 [2024-11-04 14:48:42.971527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.932 [2024-11-04 14:48:42.971533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.932 [2024-11-04 14:48:42.974320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.932 [2024-11-04 14:48:42.974345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.932 [2024-11-04 14:48:42.974351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.932 [2024-11-04 14:48:42.977142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.932 [2024-11-04 14:48:42.977168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.932 [2024-11-04 14:48:42.977173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.932 [2024-11-04 14:48:42.979966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.932 [2024-11-04 14:48:42.980058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.932 [2024-11-04 14:48:42.980065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.932 [2024-11-04 14:48:42.982869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.932 [2024-11-04 14:48:42.982896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.932 [2024-11-04 14:48:42.982902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.932 [2024-11-04 14:48:42.985673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.932 [2024-11-04 14:48:42.985696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.932 [2024-11-04 14:48:42.985702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.932 [2024-11-04 14:48:42.988465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.932 [2024-11-04 14:48:42.988490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.932 [2024-11-04 14:48:42.988496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.932 [2024-11-04 14:48:42.991305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.932 [2024-11-04 14:48:42.991398] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.932 [2024-11-04 14:48:42.991406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.932 [2024-11-04 14:48:42.994239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.932 [2024-11-04 14:48:42.994266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.932 [2024-11-04 14:48:42.994271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.932 [2024-11-04 14:48:42.997041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.932 [2024-11-04 14:48:42.997066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.932 [2024-11-04 14:48:42.997072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.932 [2024-11-04 14:48:42.999887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.932 [2024-11-04 14:48:42.999913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.932 [2024-11-04 14:48:42.999918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.932 [2024-11-04 14:48:43.002723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.932 [2024-11-04 14:48:43.002747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.932 [2024-11-04 14:48:43.002753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.932 [2024-11-04 14:48:43.005526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.932 [2024-11-04 14:48:43.005552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.932 [2024-11-04 14:48:43.005570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.932 [2024-11-04 14:48:43.008381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.932 [2024-11-04 14:48:43.008407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.932 [2024-11-04 14:48:43.008413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.932 [2024-11-04 14:48:43.011199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 
00:22:33.932 [2024-11-04 14:48:43.011225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.932 [2024-11-04 14:48:43.011230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.932 [2024-11-04 14:48:43.014060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.932 [2024-11-04 14:48:43.014152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.932 [2024-11-04 14:48:43.014159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.932 [2024-11-04 14:48:43.016941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.932 [2024-11-04 14:48:43.016967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.932 [2024-11-04 14:48:43.016973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.932 [2024-11-04 14:48:43.019777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.932 [2024-11-04 14:48:43.019801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.932 [2024-11-04 14:48:43.019807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.932 [2024-11-04 14:48:43.022590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.932 [2024-11-04 14:48:43.022628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.932 [2024-11-04 14:48:43.022634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.932 [2024-11-04 14:48:43.025414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.932 [2024-11-04 14:48:43.025502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.932 [2024-11-04 14:48:43.025510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.932 [2024-11-04 14:48:43.028311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.932 [2024-11-04 14:48:43.028338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.932 [2024-11-04 14:48:43.028344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.932 [2024-11-04 14:48:43.031147] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.932 [2024-11-04 14:48:43.031173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.932 [2024-11-04 14:48:43.031179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.932 [2024-11-04 14:48:43.033976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.932 [2024-11-04 14:48:43.034001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.932 [2024-11-04 14:48:43.034007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.932 [2024-11-04 14:48:43.036802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.932 [2024-11-04 14:48:43.036894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.932 [2024-11-04 14:48:43.036902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.932 [2024-11-04 14:48:43.039699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.932 [2024-11-04 14:48:43.039724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.932 [2024-11-04 14:48:43.039729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.932 [2024-11-04 14:48:43.042547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.932 [2024-11-04 14:48:43.042573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.933 [2024-11-04 14:48:43.042578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.933 [2024-11-04 14:48:43.045360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.933 [2024-11-04 14:48:43.045386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.933 [2024-11-04 14:48:43.045392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.933 [2024-11-04 14:48:43.048218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.933 [2024-11-04 14:48:43.048311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.933 [2024-11-04 14:48:43.048319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:22:33.933 [2024-11-04 14:48:43.051135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.933 [2024-11-04 14:48:43.051158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.933 [2024-11-04 14:48:43.051163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.933 [2024-11-04 14:48:43.053942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.933 [2024-11-04 14:48:43.053967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.933 [2024-11-04 14:48:43.053974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.933 [2024-11-04 14:48:43.056784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.933 [2024-11-04 14:48:43.056809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.933 [2024-11-04 14:48:43.056815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.933 [2024-11-04 14:48:43.059627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.933 [2024-11-04 14:48:43.059651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.933 [2024-11-04 14:48:43.059657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.933 [2024-11-04 14:48:43.062489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.933 [2024-11-04 14:48:43.062515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.933 [2024-11-04 14:48:43.062521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.933 [2024-11-04 14:48:43.065370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:33.933 [2024-11-04 14:48:43.065397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.933 [2024-11-04 14:48:43.065402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.193 [2024-11-04 14:48:43.068221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.193 [2024-11-04 14:48:43.068247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.193 [2024-11-04 14:48:43.068253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.193 [2024-11-04 14:48:43.071085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.193 [2024-11-04 14:48:43.071186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.193 [2024-11-04 14:48:43.071194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.193 [2024-11-04 14:48:43.074014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.193 [2024-11-04 14:48:43.074040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.193 [2024-11-04 14:48:43.074046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.193 [2024-11-04 14:48:43.076837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.193 [2024-11-04 14:48:43.076862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.193 [2024-11-04 14:48:43.076868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.193 [2024-11-04 14:48:43.079651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.193 [2024-11-04 14:48:43.079674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.193 [2024-11-04 14:48:43.079680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.193 [2024-11-04 14:48:43.082468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.193 [2024-11-04 14:48:43.082564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.193 [2024-11-04 14:48:43.082571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.193 [2024-11-04 14:48:43.085381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.193 [2024-11-04 14:48:43.085408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.193 [2024-11-04 14:48:43.085413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.193 [2024-11-04 14:48:43.088204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.193 [2024-11-04 14:48:43.088230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.193 [2024-11-04 14:48:43.088236] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.193 [2024-11-04 14:48:43.091061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.193 [2024-11-04 14:48:43.091087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.193 [2024-11-04 14:48:43.091093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.193 [2024-11-04 14:48:43.093892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.193 [2024-11-04 14:48:43.093981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.193 [2024-11-04 14:48:43.093989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.193 [2024-11-04 14:48:43.096784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.193 [2024-11-04 14:48:43.096809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.193 [2024-11-04 14:48:43.096815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.193 [2024-11-04 14:48:43.099586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.193 [2024-11-04 14:48:43.099619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.193 [2024-11-04 14:48:43.099625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.193 [2024-11-04 14:48:43.102407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.193 [2024-11-04 14:48:43.102433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.193 [2024-11-04 14:48:43.102439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.193 [2024-11-04 14:48:43.105235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.193 [2024-11-04 14:48:43.105329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.193 [2024-11-04 14:48:43.105336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.193 [2024-11-04 14:48:43.108147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.193 [2024-11-04 14:48:43.108174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:34.193 [2024-11-04 14:48:43.108180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.193 [2024-11-04 14:48:43.111019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.193 [2024-11-04 14:48:43.111044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.193 [2024-11-04 14:48:43.111050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.193 [2024-11-04 14:48:43.113830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.193 [2024-11-04 14:48:43.113855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.193 [2024-11-04 14:48:43.113861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.193 [2024-11-04 14:48:43.116643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.193 [2024-11-04 14:48:43.116666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.193 [2024-11-04 14:48:43.116672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.193 [2024-11-04 14:48:43.119456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.193 [2024-11-04 14:48:43.119482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.193 [2024-11-04 14:48:43.119487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.193 [2024-11-04 14:48:43.122294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.193 [2024-11-04 14:48:43.122319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.193 [2024-11-04 14:48:43.122325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.193 [2024-11-04 14:48:43.125078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.193 [2024-11-04 14:48:43.125103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.193 [2024-11-04 14:48:43.125109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.193 [2024-11-04 14:48:43.127939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.193 [2024-11-04 14:48:43.128032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.193 [2024-11-04 14:48:43.128039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.193 [2024-11-04 14:48:43.130828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.193 [2024-11-04 14:48:43.130853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.193 [2024-11-04 14:48:43.130859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.193 [2024-11-04 14:48:43.133642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.193 [2024-11-04 14:48:43.133664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.193 [2024-11-04 14:48:43.133670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.193 [2024-11-04 14:48:43.136473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.193 [2024-11-04 14:48:43.136499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.193 [2024-11-04 14:48:43.136505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.193 [2024-11-04 14:48:43.139322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.194 [2024-11-04 14:48:43.139445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.194 [2024-11-04 14:48:43.139453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.194 [2024-11-04 14:48:43.142251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.194 [2024-11-04 14:48:43.142277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.194 [2024-11-04 14:48:43.142283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.194 [2024-11-04 14:48:43.145064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.194 [2024-11-04 14:48:43.145089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.194 [2024-11-04 14:48:43.145095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.194 [2024-11-04 14:48:43.147908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.194 [2024-11-04 14:48:43.147943] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:34.194 [2024-11-04 14:48:43.147949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:22:34.194 [2024-11-04 14:48:43.150746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400)
00:22:34.194 [2024-11-04 14:48:43.150769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:34.194 [2024-11-04 14:48:43.150774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-message pattern (data digest error on tqpair=(0x1184400), the affected READ sqid:1 cid:15 nsid:1 len:32 with a varying lba, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, sqhd cycling 0001/0021/0041/0061) repeats for the remaining READ commands from 14:48:43.153 through 14:48:43.554 ...]
00:22:34.458 [2024-11-04 14:48:43.557202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400)
00:22:34.458 [2024-11-04 14:48:43.557227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:34.458 [2024-11-04 14:48:43.557232] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.459 [2024-11-04 14:48:43.560023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.459 [2024-11-04 14:48:43.560116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.459 [2024-11-04 14:48:43.560124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.459 [2024-11-04 14:48:43.562921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.459 [2024-11-04 14:48:43.562946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.459 [2024-11-04 14:48:43.562952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.459 [2024-11-04 14:48:43.565730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.459 [2024-11-04 14:48:43.565753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.459 [2024-11-04 14:48:43.565758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.459 [2024-11-04 14:48:43.568468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.459 [2024-11-04 14:48:43.568494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.459 [2024-11-04 14:48:43.568499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.459 [2024-11-04 14:48:43.571229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.459 [2024-11-04 14:48:43.571318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.459 [2024-11-04 14:48:43.571326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.459 [2024-11-04 14:48:43.574099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.459 [2024-11-04 14:48:43.574126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.459 [2024-11-04 14:48:43.574132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.459 [2024-11-04 14:48:43.576925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.459 [2024-11-04 14:48:43.576950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:34.459 [2024-11-04 14:48:43.576955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.459 [2024-11-04 14:48:43.579724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.459 [2024-11-04 14:48:43.579748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.459 [2024-11-04 14:48:43.579753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.459 [2024-11-04 14:48:43.582505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.459 [2024-11-04 14:48:43.582595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.459 [2024-11-04 14:48:43.582603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.459 [2024-11-04 14:48:43.585433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.459 [2024-11-04 14:48:43.585454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.459 [2024-11-04 14:48:43.585460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.459 [2024-11-04 14:48:43.588292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.459 [2024-11-04 14:48:43.588318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.459 [2024-11-04 14:48:43.588324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.459 [2024-11-04 14:48:43.591167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.459 [2024-11-04 14:48:43.591192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.459 [2024-11-04 14:48:43.591198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.717 [2024-11-04 14:48:43.594002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.718 [2024-11-04 14:48:43.594092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.718 [2024-11-04 14:48:43.594099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.718 [2024-11-04 14:48:43.596894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.718 [2024-11-04 14:48:43.596919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.718 [2024-11-04 14:48:43.596925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.718 [2024-11-04 14:48:43.599710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.718 [2024-11-04 14:48:43.599733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.718 [2024-11-04 14:48:43.599739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.718 [2024-11-04 14:48:43.602521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.718 [2024-11-04 14:48:43.602546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.718 [2024-11-04 14:48:43.602552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.718 [2024-11-04 14:48:43.605343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.718 [2024-11-04 14:48:43.605431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.718 [2024-11-04 14:48:43.605438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.718 [2024-11-04 14:48:43.608268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.718 [2024-11-04 14:48:43.608293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.718 [2024-11-04 14:48:43.608299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.718 [2024-11-04 14:48:43.611102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.718 [2024-11-04 14:48:43.611128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.718 [2024-11-04 14:48:43.611133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.718 [2024-11-04 14:48:43.613878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.718 [2024-11-04 14:48:43.613902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.718 [2024-11-04 14:48:43.613908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.718 [2024-11-04 14:48:43.616679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.718 [2024-11-04 14:48:43.616701] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.718 [2024-11-04 14:48:43.616707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.718 [2024-11-04 14:48:43.619534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.718 [2024-11-04 14:48:43.619560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.718 [2024-11-04 14:48:43.619565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.718 [2024-11-04 14:48:43.622395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.718 [2024-11-04 14:48:43.622420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.718 [2024-11-04 14:48:43.622426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.718 [2024-11-04 14:48:43.625214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.718 [2024-11-04 14:48:43.625240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.718 [2024-11-04 14:48:43.625245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.718 [2024-11-04 14:48:43.628058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.718 [2024-11-04 14:48:43.628150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.718 [2024-11-04 14:48:43.628158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.718 [2024-11-04 14:48:43.630987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.718 [2024-11-04 14:48:43.631014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.718 [2024-11-04 14:48:43.631020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.718 [2024-11-04 14:48:43.633806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.718 [2024-11-04 14:48:43.633830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.718 [2024-11-04 14:48:43.633836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.718 [2024-11-04 14:48:43.636556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 
00:22:34.718 [2024-11-04 14:48:43.636582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.718 [2024-11-04 14:48:43.636588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.718 [2024-11-04 14:48:43.639314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.718 [2024-11-04 14:48:43.639400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.718 [2024-11-04 14:48:43.639407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.718 [2024-11-04 14:48:43.642118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.718 [2024-11-04 14:48:43.642144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.718 [2024-11-04 14:48:43.642149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.718 [2024-11-04 14:48:43.644889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.718 [2024-11-04 14:48:43.644913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.718 [2024-11-04 14:48:43.644919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.718 [2024-11-04 14:48:43.647745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.718 [2024-11-04 14:48:43.647768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.718 [2024-11-04 14:48:43.647774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.718 [2024-11-04 14:48:43.650551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.718 [2024-11-04 14:48:43.650655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.718 [2024-11-04 14:48:43.650663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.718 [2024-11-04 14:48:43.653454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.718 [2024-11-04 14:48:43.653476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.718 [2024-11-04 14:48:43.653481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.718 [2024-11-04 14:48:43.656306] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.718 [2024-11-04 14:48:43.656332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.718 [2024-11-04 14:48:43.656338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.718 [2024-11-04 14:48:43.659163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.718 [2024-11-04 14:48:43.659188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.718 [2024-11-04 14:48:43.659194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.718 [2024-11-04 14:48:43.661960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.718 [2024-11-04 14:48:43.662050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.718 [2024-11-04 14:48:43.662058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.718 [2024-11-04 14:48:43.664856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.718 [2024-11-04 14:48:43.664882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.718 [2024-11-04 14:48:43.664887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.718 [2024-11-04 14:48:43.667652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.718 [2024-11-04 14:48:43.667685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.719 [2024-11-04 14:48:43.667690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.719 [2024-11-04 14:48:43.670390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.719 [2024-11-04 14:48:43.670415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.719 [2024-11-04 14:48:43.670420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.719 [2024-11-04 14:48:43.673183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.719 [2024-11-04 14:48:43.673273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.719 [2024-11-04 14:48:43.673281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:22:34.719 [2024-11-04 14:48:43.676092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.719 [2024-11-04 14:48:43.676114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.719 [2024-11-04 14:48:43.676120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.719 [2024-11-04 14:48:43.678908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.719 [2024-11-04 14:48:43.678933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.719 [2024-11-04 14:48:43.678938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.719 [2024-11-04 14:48:43.681722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.719 [2024-11-04 14:48:43.681745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.719 [2024-11-04 14:48:43.681750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.719 [2024-11-04 14:48:43.684531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.719 [2024-11-04 14:48:43.684630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.719 [2024-11-04 14:48:43.684638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.719 [2024-11-04 14:48:43.687475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.719 [2024-11-04 14:48:43.687501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.719 [2024-11-04 14:48:43.687506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.719 [2024-11-04 14:48:43.690286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.719 [2024-11-04 14:48:43.690311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.719 [2024-11-04 14:48:43.690317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.719 [2024-11-04 14:48:43.693134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.719 [2024-11-04 14:48:43.693161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.719 [2024-11-04 14:48:43.693166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.719 [2024-11-04 14:48:43.695979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.719 [2024-11-04 14:48:43.696068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.719 [2024-11-04 14:48:43.696075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.719 [2024-11-04 14:48:43.698855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.719 [2024-11-04 14:48:43.698880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.719 [2024-11-04 14:48:43.698885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.719 [2024-11-04 14:48:43.701707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.719 [2024-11-04 14:48:43.701729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.719 [2024-11-04 14:48:43.701735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.719 [2024-11-04 14:48:43.704534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.719 [2024-11-04 14:48:43.704560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.719 [2024-11-04 14:48:43.704565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.719 [2024-11-04 14:48:43.707353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.719 [2024-11-04 14:48:43.707442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.719 [2024-11-04 14:48:43.707449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.719 [2024-11-04 14:48:43.710262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.719 [2024-11-04 14:48:43.710288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.719 [2024-11-04 14:48:43.710294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.719 [2024-11-04 14:48:43.713056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.719 [2024-11-04 14:48:43.713082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.719 [2024-11-04 14:48:43.713087] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.719 [2024-11-04 14:48:43.715890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.719 [2024-11-04 14:48:43.715917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.719 [2024-11-04 14:48:43.715923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.719 [2024-11-04 14:48:43.718697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.719 [2024-11-04 14:48:43.718720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.719 [2024-11-04 14:48:43.718725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.719 [2024-11-04 14:48:43.721484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.719 [2024-11-04 14:48:43.721510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.719 [2024-11-04 14:48:43.721515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.719 [2024-11-04 14:48:43.724311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.719 [2024-11-04 14:48:43.724336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.719 [2024-11-04 14:48:43.724342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.719 [2024-11-04 14:48:43.727144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.719 [2024-11-04 14:48:43.727170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.719 [2024-11-04 14:48:43.727176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.719 10835.00 IOPS, 1354.38 MiB/s [2024-11-04T14:48:43.859Z] [2024-11-04 14:48:43.731190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.719 [2024-11-04 14:48:43.731213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.719 [2024-11-04 14:48:43.731219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.719 [2024-11-04 14:48:43.734048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.719 [2024-11-04 14:48:43.734140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.719 [2024-11-04 14:48:43.734147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.719 [2024-11-04 14:48:43.736924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.719 [2024-11-04 14:48:43.736950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.719 [2024-11-04 14:48:43.736956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.719 [2024-11-04 14:48:43.739761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.719 [2024-11-04 14:48:43.739786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.719 [2024-11-04 14:48:43.739791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.719 [2024-11-04 14:48:43.742487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.719 [2024-11-04 14:48:43.742513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.719 [2024-11-04 14:48:43.742519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.719 [2024-11-04 14:48:43.745229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.720 [2024-11-04 14:48:43.745317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.720 [2024-11-04 14:48:43.745324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.720 [2024-11-04 14:48:43.748070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.720 [2024-11-04 14:48:43.748093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.720 [2024-11-04 14:48:43.748098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.720 [2024-11-04 14:48:43.750840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.720 [2024-11-04 14:48:43.750864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.720 [2024-11-04 14:48:43.750869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.720 [2024-11-04 14:48:43.753614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.720 [2024-11-04 14:48:43.753636] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.720 [2024-11-04 14:48:43.753641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.720 [2024-11-04 14:48:43.756327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.720 [2024-11-04 14:48:43.756414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.720 [2024-11-04 14:48:43.756422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.720 [2024-11-04 14:48:43.759137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.720 [2024-11-04 14:48:43.759162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.720 [2024-11-04 14:48:43.759168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.720 [2024-11-04 14:48:43.761869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.720 [2024-11-04 14:48:43.761893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.720 [2024-11-04 14:48:43.761899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.720 [2024-11-04 14:48:43.764580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.720 [2024-11-04 14:48:43.764617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.720 [2024-11-04 14:48:43.764623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.720 [2024-11-04 14:48:43.767341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.720 [2024-11-04 14:48:43.767429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.720 [2024-11-04 14:48:43.767436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.720 [2024-11-04 14:48:43.770190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.720 [2024-11-04 14:48:43.770217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.720 [2024-11-04 14:48:43.770223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.720 [2024-11-04 14:48:43.772992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 
00:22:34.720 [2024-11-04 14:48:43.773017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.720 [2024-11-04 14:48:43.773023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.720 [2024-11-04 14:48:43.775801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.720 [2024-11-04 14:48:43.775825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.720 [2024-11-04 14:48:43.775831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.720 [2024-11-04 14:48:43.778666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.720 [2024-11-04 14:48:43.778689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.720 [2024-11-04 14:48:43.778695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.720 [2024-11-04 14:48:43.781453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.720 [2024-11-04 14:48:43.781479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.720 [2024-11-04 14:48:43.781484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.720 [2024-11-04 14:48:43.784261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.720 [2024-11-04 14:48:43.784285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.720 [2024-11-04 14:48:43.784290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.720 [2024-11-04 14:48:43.787131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.720 [2024-11-04 14:48:43.787157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.720 [2024-11-04 14:48:43.787163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.720 [2024-11-04 14:48:43.789934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.720 [2024-11-04 14:48:43.790024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.720 [2024-11-04 14:48:43.790032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.720 [2024-11-04 14:48:43.792817] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.720 [2024-11-04 14:48:43.792841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.720 [2024-11-04 14:48:43.792847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.720 [2024-11-04 14:48:43.795656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.720 [2024-11-04 14:48:43.795677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.720 [2024-11-04 14:48:43.795683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.720 [2024-11-04 14:48:43.798509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.720 [2024-11-04 14:48:43.798532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.720 [2024-11-04 14:48:43.798538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.720 [2024-11-04 14:48:43.801297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.720 [2024-11-04 14:48:43.801386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.720 [2024-11-04 14:48:43.801394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.720 [2024-11-04 14:48:43.804252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.720 [2024-11-04 14:48:43.804275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.720 [2024-11-04 14:48:43.804281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.720 [2024-11-04 14:48:43.807073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.720 [2024-11-04 14:48:43.807168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.720 [2024-11-04 14:48:43.807232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.720 [2024-11-04 14:48:43.810102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.720 [2024-11-04 14:48:43.810125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.720 [2024-11-04 14:48:43.810131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:22:34.720 [2024-11-04 14:48:43.812915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.720 [2024-11-04 14:48:43.812937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.720 [2024-11-04 14:48:43.812943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.720 [2024-11-04 14:48:43.815771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.720 [2024-11-04 14:48:43.815793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.720 [2024-11-04 14:48:43.815798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.720 [2024-11-04 14:48:43.818674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.720 [2024-11-04 14:48:43.818765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.721 [2024-11-04 14:48:43.818827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.721 [2024-11-04 14:48:43.821689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.721 [2024-11-04 14:48:43.821776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.721 [2024-11-04 14:48:43.821822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.721 [2024-11-04 14:48:43.824699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.721 [2024-11-04 14:48:43.824788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.721 [2024-11-04 14:48:43.824845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.721 [2024-11-04 14:48:43.827695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.721 [2024-11-04 14:48:43.827785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.721 [2024-11-04 14:48:43.827829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.721 [2024-11-04 14:48:43.830625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.721 [2024-11-04 14:48:43.830714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.721 [2024-11-04 14:48:43.830756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.721 [2024-11-04 14:48:43.833570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.721 [2024-11-04 14:48:43.833675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.721 [2024-11-04 14:48:43.833684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.721 [2024-11-04 14:48:43.836367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.721 [2024-11-04 14:48:43.836387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.721 [2024-11-04 14:48:43.836392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.721 [2024-11-04 14:48:43.839124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.721 [2024-11-04 14:48:43.839215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.721 [2024-11-04 14:48:43.839259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.721 [2024-11-04 14:48:43.842023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.721 [2024-11-04 14:48:43.842114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.721 [2024-11-04 14:48:43.842162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.721 [2024-11-04 14:48:43.844961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.721 [2024-11-04 14:48:43.845051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.721 [2024-11-04 14:48:43.845099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.721 [2024-11-04 14:48:43.847898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.721 [2024-11-04 14:48:43.847991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.721 [2024-11-04 14:48:43.848038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.721 [2024-11-04 14:48:43.850874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.721 [2024-11-04 14:48:43.850965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.721 [2024-11-04 14:48:43.851008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.721 [2024-11-04 14:48:43.853790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.721 [2024-11-04 14:48:43.853878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.721 [2024-11-04 14:48:43.853923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.980 [2024-11-04 14:48:43.856749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.980 [2024-11-04 14:48:43.856837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.980 [2024-11-04 14:48:43.856882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.980 [2024-11-04 14:48:43.859726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.980 [2024-11-04 14:48:43.859813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.981 [2024-11-04 14:48:43.859861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.981 [2024-11-04 14:48:43.862698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.981 [2024-11-04 14:48:43.862785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.981 [2024-11-04 14:48:43.862829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.981 [2024-11-04 14:48:43.865627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.981 [2024-11-04 14:48:43.865713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.981 [2024-11-04 14:48:43.865758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.981 [2024-11-04 14:48:43.868571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.981 [2024-11-04 14:48:43.868674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.981 [2024-11-04 14:48:43.868722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.981 [2024-11-04 14:48:43.871635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.981 [2024-11-04 14:48:43.871724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:34.981 [2024-11-04 14:48:43.871767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.981 [2024-11-04 14:48:43.874618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.981 [2024-11-04 14:48:43.874706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.981 [2024-11-04 14:48:43.874776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.981 [2024-11-04 14:48:43.877598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.981 [2024-11-04 14:48:43.877702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.981 [2024-11-04 14:48:43.877749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.981 [2024-11-04 14:48:43.880577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.981 [2024-11-04 14:48:43.880676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.981 [2024-11-04 14:48:43.880719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.981 [2024-11-04 14:48:43.883566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.981 [2024-11-04 14:48:43.883668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.981 [2024-11-04 14:48:43.883719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.981 [2024-11-04 14:48:43.886529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.981 [2024-11-04 14:48:43.886637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.981 [2024-11-04 14:48:43.886681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.981 [2024-11-04 14:48:43.889476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.981 [2024-11-04 14:48:43.889574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.981 [2024-11-04 14:48:43.889674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.981 [2024-11-04 14:48:43.892485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.981 [2024-11-04 14:48:43.892579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.981 [2024-11-04 14:48:43.892631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.981 [2024-11-04 14:48:43.895437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.981 [2024-11-04 14:48:43.895530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.981 [2024-11-04 14:48:43.895573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.981 [2024-11-04 14:48:43.898301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.981 [2024-11-04 14:48:43.898394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.981 [2024-11-04 14:48:43.898443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.981 [2024-11-04 14:48:43.901269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.981 [2024-11-04 14:48:43.901362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.981 [2024-11-04 14:48:43.901409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.981 [2024-11-04 14:48:43.904301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.981 [2024-11-04 14:48:43.904392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.981 [2024-11-04 14:48:43.904435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.981 [2024-11-04 14:48:43.907267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.981 [2024-11-04 14:48:43.907361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.981 [2024-11-04 14:48:43.907406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.981 [2024-11-04 14:48:43.910250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.981 [2024-11-04 14:48:43.910343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.981 [2024-11-04 14:48:43.910386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.981 [2024-11-04 14:48:43.913220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.981 [2024-11-04 14:48:43.913312] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.981 [2024-11-04 14:48:43.913357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.981 [2024-11-04 14:48:43.916205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.981 [2024-11-04 14:48:43.916298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.981 [2024-11-04 14:48:43.916344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.981 [2024-11-04 14:48:43.919155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.981 [2024-11-04 14:48:43.919248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.981 [2024-11-04 14:48:43.919293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.981 [2024-11-04 14:48:43.922092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.981 [2024-11-04 14:48:43.922185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.981 [2024-11-04 14:48:43.922244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.981 [2024-11-04 14:48:43.925041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.981 [2024-11-04 14:48:43.925134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.981 [2024-11-04 14:48:43.925176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.981 [2024-11-04 14:48:43.928068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.981 [2024-11-04 14:48:43.928161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.981 [2024-11-04 14:48:43.928206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.981 [2024-11-04 14:48:43.931054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.981 [2024-11-04 14:48:43.931148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.981 [2024-11-04 14:48:43.931192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.981 [2024-11-04 14:48:43.934034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 
00:22:34.981 [2024-11-04 14:48:43.934126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.981 [2024-11-04 14:48:43.934170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.981 [2024-11-04 14:48:43.936977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.981 [2024-11-04 14:48:43.937068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.981 [2024-11-04 14:48:43.937109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.981 [2024-11-04 14:48:43.939881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.982 [2024-11-04 14:48:43.939973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.982 [2024-11-04 14:48:43.940017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.982 [2024-11-04 14:48:43.942779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.982 [2024-11-04 14:48:43.942869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.982 [2024-11-04 14:48:43.942911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.982 [2024-11-04 14:48:43.945691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.982 [2024-11-04 14:48:43.945781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.982 [2024-11-04 14:48:43.945826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.982 [2024-11-04 14:48:43.948589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.982 [2024-11-04 14:48:43.948697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.982 [2024-11-04 14:48:43.948733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.982 [2024-11-04 14:48:43.951510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.982 [2024-11-04 14:48:43.951538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.982 [2024-11-04 14:48:43.951544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.982 [2024-11-04 14:48:43.954337] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.982 [2024-11-04 14:48:43.954363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.982 [2024-11-04 14:48:43.954369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.982 [2024-11-04 14:48:43.957204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.982 [2024-11-04 14:48:43.957295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.982 [2024-11-04 14:48:43.957302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.982 [2024-11-04 14:48:43.960123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.982 [2024-11-04 14:48:43.960148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.982 [2024-11-04 14:48:43.960154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.982 [2024-11-04 14:48:43.962967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.982 [2024-11-04 14:48:43.962992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.982 [2024-11-04 14:48:43.962997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.982 [2024-11-04 14:48:43.965801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.982 [2024-11-04 14:48:43.965825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.982 [2024-11-04 14:48:43.965831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.982 [2024-11-04 14:48:43.968665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.982 [2024-11-04 14:48:43.968688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.982 [2024-11-04 14:48:43.968694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.982 [2024-11-04 14:48:43.971462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.982 [2024-11-04 14:48:43.971487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.982 [2024-11-04 14:48:43.971493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:22:34.982 [2024-11-04 14:48:43.974325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.982 [2024-11-04 14:48:43.974350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.982 [2024-11-04 14:48:43.974356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.982 [2024-11-04 14:48:43.977158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.982 [2024-11-04 14:48:43.977184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.982 [2024-11-04 14:48:43.977190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.982 [2024-11-04 14:48:43.980015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.982 [2024-11-04 14:48:43.980110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.982 [2024-11-04 14:48:43.980118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.982 [2024-11-04 14:48:43.982924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.982 [2024-11-04 14:48:43.982951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.982 [2024-11-04 14:48:43.982957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.982 [2024-11-04 14:48:43.985758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.982 [2024-11-04 14:48:43.985781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.982 [2024-11-04 14:48:43.985787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.982 [2024-11-04 14:48:43.988945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.982 [2024-11-04 14:48:43.988971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.982 [2024-11-04 14:48:43.988977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.982 [2024-11-04 14:48:43.991807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.982 [2024-11-04 14:48:43.991831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.982 [2024-11-04 14:48:43.991837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.982 [2024-11-04 14:48:43.994640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.982 [2024-11-04 14:48:43.994664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.982 [2024-11-04 14:48:43.994670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.982 [2024-11-04 14:48:43.997428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.982 [2024-11-04 14:48:43.997453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.982 [2024-11-04 14:48:43.997458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.982 [2024-11-04 14:48:44.000178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.982 [2024-11-04 14:48:44.000204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.982 [2024-11-04 14:48:44.000210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.982 [2024-11-04 14:48:44.003043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.982 [2024-11-04 14:48:44.003069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.982 [2024-11-04 14:48:44.003075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.982 [2024-11-04 14:48:44.005905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.982 [2024-11-04 14:48:44.005997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.982 [2024-11-04 14:48:44.006005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.982 [2024-11-04 14:48:44.008837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.982 [2024-11-04 14:48:44.008862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.982 [2024-11-04 14:48:44.008868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.982 [2024-11-04 14:48:44.011648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.982 [2024-11-04 14:48:44.011671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.982 [2024-11-04 14:48:44.011677] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.982 [2024-11-04 14:48:44.014496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.982 [2024-11-04 14:48:44.014521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.982 [2024-11-04 14:48:44.014527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.982 [2024-11-04 14:48:44.017304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.983 [2024-11-04 14:48:44.017394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.983 [2024-11-04 14:48:44.017401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.983 [2024-11-04 14:48:44.020218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.983 [2024-11-04 14:48:44.020243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.983 [2024-11-04 14:48:44.020249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.983 [2024-11-04 14:48:44.023008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.983 [2024-11-04 14:48:44.023033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.983 [2024-11-04 14:48:44.023039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.983 [2024-11-04 14:48:44.025854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.983 [2024-11-04 14:48:44.025879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.983 [2024-11-04 14:48:44.025885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.983 [2024-11-04 14:48:44.028676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.983 [2024-11-04 14:48:44.028699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.983 [2024-11-04 14:48:44.028704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.983 [2024-11-04 14:48:44.031496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.983 [2024-11-04 14:48:44.031521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:34.983 [2024-11-04 14:48:44.031527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.983 [2024-11-04 14:48:44.034307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.983 [2024-11-04 14:48:44.034332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.983 [2024-11-04 14:48:44.034338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.983 [2024-11-04 14:48:44.037141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.983 [2024-11-04 14:48:44.037166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.983 [2024-11-04 14:48:44.037171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.983 [2024-11-04 14:48:44.039999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.983 [2024-11-04 14:48:44.040090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.983 [2024-11-04 14:48:44.040097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.983 [2024-11-04 14:48:44.042887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.983 [2024-11-04 14:48:44.042912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.983 [2024-11-04 14:48:44.042918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.983 [2024-11-04 14:48:44.045723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.983 [2024-11-04 14:48:44.045747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.983 [2024-11-04 14:48:44.045752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.983 [2024-11-04 14:48:44.048498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.983 [2024-11-04 14:48:44.048524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.983 [2024-11-04 14:48:44.048530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.983 [2024-11-04 14:48:44.051314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.983 [2024-11-04 14:48:44.051403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.983 [2024-11-04 14:48:44.051410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.983 [2024-11-04 14:48:44.054223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.983 [2024-11-04 14:48:44.054316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.983 [2024-11-04 14:48:44.054361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.983 [2024-11-04 14:48:44.057226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.983 [2024-11-04 14:48:44.057317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.983 [2024-11-04 14:48:44.057364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.983 [2024-11-04 14:48:44.060173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.983 [2024-11-04 14:48:44.060267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.983 [2024-11-04 14:48:44.060312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.983 [2024-11-04 14:48:44.063136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.983 [2024-11-04 14:48:44.063229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.983 [2024-11-04 14:48:44.063274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.983 [2024-11-04 14:48:44.066063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.983 [2024-11-04 14:48:44.066153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.983 [2024-11-04 14:48:44.066209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.983 [2024-11-04 14:48:44.068925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.983 [2024-11-04 14:48:44.069015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.983 [2024-11-04 14:48:44.069121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.983 [2024-11-04 14:48:44.072175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.983 [2024-11-04 14:48:44.072284] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.983 [2024-11-04 14:48:44.072330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.983 [2024-11-04 14:48:44.075097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.983 [2024-11-04 14:48:44.075184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.983 [2024-11-04 14:48:44.075192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.983 [2024-11-04 14:48:44.077928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.983 [2024-11-04 14:48:44.077955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.983 [2024-11-04 14:48:44.077961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.983 [2024-11-04 14:48:44.080695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.983 [2024-11-04 14:48:44.080717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.983 [2024-11-04 14:48:44.080723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.983 [2024-11-04 14:48:44.083476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.983 [2024-11-04 14:48:44.083502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.983 [2024-11-04 14:48:44.083508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.983 [2024-11-04 14:48:44.086269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.983 [2024-11-04 14:48:44.086358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.983 [2024-11-04 14:48:44.086366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.983 [2024-11-04 14:48:44.089094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.983 [2024-11-04 14:48:44.089120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.983 [2024-11-04 14:48:44.089125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.983 [2024-11-04 14:48:44.091859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 
00:22:34.983 [2024-11-04 14:48:44.091883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.983 [2024-11-04 14:48:44.091889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.983 [2024-11-04 14:48:44.094602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.984 [2024-11-04 14:48:44.094634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.984 [2024-11-04 14:48:44.094639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.984 [2024-11-04 14:48:44.097389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.984 [2024-11-04 14:48:44.097475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.984 [2024-11-04 14:48:44.097483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.984 [2024-11-04 14:48:44.100220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.984 [2024-11-04 14:48:44.100246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.984 [2024-11-04 14:48:44.100251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.984 [2024-11-04 14:48:44.103025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.984 [2024-11-04 14:48:44.103050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.984 [2024-11-04 14:48:44.103055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.984 [2024-11-04 14:48:44.105788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.984 [2024-11-04 14:48:44.105811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.984 [2024-11-04 14:48:44.105817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.984 [2024-11-04 14:48:44.108616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.984 [2024-11-04 14:48:44.108640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.984 [2024-11-04 14:48:44.108645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.984 [2024-11-04 14:48:44.111456] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.984 [2024-11-04 14:48:44.111482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.984 [2024-11-04 14:48:44.111488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.984 [2024-11-04 14:48:44.114310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:34.984 [2024-11-04 14:48:44.114335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.984 [2024-11-04 14:48:44.114341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.244 [2024-11-04 14:48:44.117142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.244 [2024-11-04 14:48:44.117233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.244 [2024-11-04 14:48:44.117240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.244 [2024-11-04 14:48:44.120037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.244 [2024-11-04 14:48:44.120059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.244 [2024-11-04 14:48:44.120065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.244 [2024-11-04 14:48:44.122865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.244 [2024-11-04 14:48:44.122890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.244 [2024-11-04 14:48:44.122896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.244 [2024-11-04 14:48:44.125718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.244 [2024-11-04 14:48:44.125741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.244 [2024-11-04 14:48:44.125746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.244 [2024-11-04 14:48:44.128532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.244 [2024-11-04 14:48:44.128635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.244 [2024-11-04 14:48:44.128643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:22:35.244 [2024-11-04 14:48:44.131418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.244 [2024-11-04 14:48:44.131441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.244 [2024-11-04 14:48:44.131446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.244 [2024-11-04 14:48:44.134240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.244 [2024-11-04 14:48:44.134266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.244 [2024-11-04 14:48:44.134272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.244 [2024-11-04 14:48:44.137044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.244 [2024-11-04 14:48:44.137070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.244 [2024-11-04 14:48:44.137075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.244 [2024-11-04 14:48:44.139868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.244 [2024-11-04 14:48:44.139956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.244 [2024-11-04 14:48:44.139964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.244 [2024-11-04 14:48:44.142726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.244 [2024-11-04 14:48:44.142750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.244 [2024-11-04 14:48:44.142755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.244 [2024-11-04 14:48:44.145534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.244 [2024-11-04 14:48:44.145571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.244 [2024-11-04 14:48:44.145577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.244 [2024-11-04 14:48:44.148407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.244 [2024-11-04 14:48:44.148433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.244 [2024-11-04 14:48:44.148439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.244 [2024-11-04 14:48:44.151296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.244 [2024-11-04 14:48:44.151390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.244 [2024-11-04 14:48:44.151398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.245 [2024-11-04 14:48:44.154232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.245 [2024-11-04 14:48:44.154259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.245 [2024-11-04 14:48:44.154264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.245 [2024-11-04 14:48:44.157059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.245 [2024-11-04 14:48:44.157085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.245 [2024-11-04 14:48:44.157090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.245 [2024-11-04 14:48:44.159902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.245 [2024-11-04 14:48:44.159928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.245 [2024-11-04 14:48:44.159933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.245 [2024-11-04 14:48:44.162793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.245 [2024-11-04 14:48:44.162817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.245 [2024-11-04 14:48:44.162823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.245 [2024-11-04 14:48:44.165642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.245 [2024-11-04 14:48:44.165664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.245 [2024-11-04 14:48:44.165670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.245 [2024-11-04 14:48:44.168509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.245 [2024-11-04 14:48:44.168536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.245 [2024-11-04 14:48:44.168542] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.245 [2024-11-04 14:48:44.171359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.245 [2024-11-04 14:48:44.171385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.245 [2024-11-04 14:48:44.171390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.245 [2024-11-04 14:48:44.174236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.245 [2024-11-04 14:48:44.174330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.245 [2024-11-04 14:48:44.174338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.245 [2024-11-04 14:48:44.177140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.245 [2024-11-04 14:48:44.177165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.245 [2024-11-04 14:48:44.177171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.245 [2024-11-04 14:48:44.179963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.245 [2024-11-04 14:48:44.179988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.245 [2024-11-04 14:48:44.179994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.245 [2024-11-04 14:48:44.182810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.245 [2024-11-04 14:48:44.182834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.245 [2024-11-04 14:48:44.182840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.245 [2024-11-04 14:48:44.185645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.245 [2024-11-04 14:48:44.185671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.245 [2024-11-04 14:48:44.185677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.245 [2024-11-04 14:48:44.188461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.245 [2024-11-04 14:48:44.188486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:35.245 [2024-11-04 14:48:44.188492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.245 [2024-11-04 14:48:44.191333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.245 [2024-11-04 14:48:44.191359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.245 [2024-11-04 14:48:44.191365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.245 [2024-11-04 14:48:44.194191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.245 [2024-11-04 14:48:44.194216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.245 [2024-11-04 14:48:44.194221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.245 [2024-11-04 14:48:44.197031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.245 [2024-11-04 14:48:44.197125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.245 [2024-11-04 14:48:44.197132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.245 [2024-11-04 14:48:44.199936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.245 [2024-11-04 14:48:44.199962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.245 [2024-11-04 14:48:44.199968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.245 [2024-11-04 14:48:44.202794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.245 [2024-11-04 14:48:44.202818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.245 [2024-11-04 14:48:44.202824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.245 [2024-11-04 14:48:44.205585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.245 [2024-11-04 14:48:44.205621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.245 [2024-11-04 14:48:44.205626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.245 [2024-11-04 14:48:44.208421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.245 [2024-11-04 14:48:44.208511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.245 [2024-11-04 14:48:44.208518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.245 [2024-11-04 14:48:44.211348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.245 [2024-11-04 14:48:44.211375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.245 [2024-11-04 14:48:44.211380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.245 [2024-11-04 14:48:44.214195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.245 [2024-11-04 14:48:44.214221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.245 [2024-11-04 14:48:44.214226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.245 [2024-11-04 14:48:44.217034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.245 [2024-11-04 14:48:44.217060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.245 [2024-11-04 14:48:44.217066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.245 [2024-11-04 14:48:44.219892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.245 [2024-11-04 14:48:44.219982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.245 [2024-11-04 14:48:44.219989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.245 [2024-11-04 14:48:44.222815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.245 [2024-11-04 14:48:44.222840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.245 [2024-11-04 14:48:44.222846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.245 [2024-11-04 14:48:44.225655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.245 [2024-11-04 14:48:44.225677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.245 [2024-11-04 14:48:44.225682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.245 [2024-11-04 14:48:44.228445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.245 [2024-11-04 14:48:44.228470] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.245 [2024-11-04 14:48:44.228476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.246 [2024-11-04 14:48:44.231255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.246 [2024-11-04 14:48:44.231343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-04 14:48:44.231351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.246 [2024-11-04 14:48:44.234141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.246 [2024-11-04 14:48:44.234167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-04 14:48:44.234172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.246 [2024-11-04 14:48:44.236955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.246 [2024-11-04 14:48:44.236980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-04 14:48:44.236986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.246 [2024-11-04 14:48:44.239772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.246 [2024-11-04 14:48:44.239796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-04 14:48:44.239802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.246 [2024-11-04 14:48:44.242645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.246 [2024-11-04 14:48:44.242668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-04 14:48:44.242674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.246 [2024-11-04 14:48:44.245434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.246 [2024-11-04 14:48:44.245460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-04 14:48:44.245465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.246 [2024-11-04 14:48:44.248263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 
00:22:35.246 [2024-11-04 14:48:44.248289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-04 14:48:44.248295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.246 [2024-11-04 14:48:44.251088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.246 [2024-11-04 14:48:44.251113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-04 14:48:44.251119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.246 [2024-11-04 14:48:44.253947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.246 [2024-11-04 14:48:44.254043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-04 14:48:44.254051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.246 [2024-11-04 14:48:44.256844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.246 [2024-11-04 14:48:44.256869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-04 14:48:44.256875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.246 [2024-11-04 14:48:44.259692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.246 [2024-11-04 14:48:44.259715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-04 14:48:44.259720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.246 [2024-11-04 14:48:44.262540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.246 [2024-11-04 14:48:44.262566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-04 14:48:44.262572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.246 [2024-11-04 14:48:44.265375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.246 [2024-11-04 14:48:44.265468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-04 14:48:44.265475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.246 [2024-11-04 14:48:44.268287] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.246 [2024-11-04 14:48:44.268309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-04 14:48:44.268316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.246 [2024-11-04 14:48:44.271091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.246 [2024-11-04 14:48:44.271116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-04 14:48:44.271122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.246 [2024-11-04 14:48:44.273901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.246 [2024-11-04 14:48:44.273925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-04 14:48:44.273931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.246 [2024-11-04 14:48:44.276707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.246 [2024-11-04 14:48:44.276730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-04 14:48:44.276736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.246 [2024-11-04 14:48:44.279510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.246 [2024-11-04 14:48:44.279536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-04 14:48:44.279542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.246 [2024-11-04 14:48:44.282371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.246 [2024-11-04 14:48:44.282397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-04 14:48:44.282403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.246 [2024-11-04 14:48:44.285215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.246 [2024-11-04 14:48:44.285241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-04 14:48:44.285247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:22:35.246 [2024-11-04 14:48:44.288077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.246 [2024-11-04 14:48:44.288176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-04 14:48:44.288183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.246 [2024-11-04 14:48:44.291019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.246 [2024-11-04 14:48:44.291044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-04 14:48:44.291050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.246 [2024-11-04 14:48:44.293841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.246 [2024-11-04 14:48:44.293866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-04 14:48:44.293871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.246 [2024-11-04 14:48:44.296720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.246 [2024-11-04 14:48:44.296743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-04 14:48:44.296750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.246 [2024-11-04 14:48:44.299534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.246 [2024-11-04 14:48:44.299640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-04 14:48:44.299647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.246 [2024-11-04 14:48:44.302427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.246 [2024-11-04 14:48:44.302453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-04 14:48:44.302459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.246 [2024-11-04 14:48:44.305260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.246 [2024-11-04 14:48:44.305285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-04 14:48:44.305290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.247 [2024-11-04 14:48:44.308101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.247 [2024-11-04 14:48:44.308126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.247 [2024-11-04 14:48:44.308132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.247 [2024-11-04 14:48:44.310903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.247 [2024-11-04 14:48:44.310994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.247 [2024-11-04 14:48:44.311002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.247 [2024-11-04 14:48:44.313801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.247 [2024-11-04 14:48:44.313825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.247 [2024-11-04 14:48:44.313831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.247 [2024-11-04 14:48:44.316588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.247 [2024-11-04 14:48:44.316621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.247 [2024-11-04 14:48:44.316627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.247 [2024-11-04 14:48:44.319429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.247 [2024-11-04 14:48:44.319454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.247 [2024-11-04 14:48:44.319460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.247 [2024-11-04 14:48:44.322266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.247 [2024-11-04 14:48:44.322356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.247 [2024-11-04 14:48:44.322364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.247 [2024-11-04 14:48:44.325137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.247 [2024-11-04 14:48:44.325159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.247 [2024-11-04 14:48:44.325165] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.247 [2024-11-04 14:48:44.327979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.247 [2024-11-04 14:48:44.328004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.247 [2024-11-04 14:48:44.328009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.247 [2024-11-04 14:48:44.330806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.247 [2024-11-04 14:48:44.330830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.247 [2024-11-04 14:48:44.330836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.247 [2024-11-04 14:48:44.333643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.247 [2024-11-04 14:48:44.333665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.247 [2024-11-04 14:48:44.333671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.247 [2024-11-04 14:48:44.336434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.247 [2024-11-04 14:48:44.336460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.247 [2024-11-04 14:48:44.336466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.247 [2024-11-04 14:48:44.339243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.247 [2024-11-04 14:48:44.339269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.247 [2024-11-04 14:48:44.339274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.247 [2024-11-04 14:48:44.342066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.247 [2024-11-04 14:48:44.342090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.247 [2024-11-04 14:48:44.342096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.247 [2024-11-04 14:48:44.344908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.247 [2024-11-04 14:48:44.344996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:35.247 [2024-11-04 14:48:44.345003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.247 [2024-11-04 14:48:44.347784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.247 [2024-11-04 14:48:44.347808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.247 [2024-11-04 14:48:44.347814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.247 [2024-11-04 14:48:44.350601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.247 [2024-11-04 14:48:44.350636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.247 [2024-11-04 14:48:44.350641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.247 [2024-11-04 14:48:44.353389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.247 [2024-11-04 14:48:44.353414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.247 [2024-11-04 14:48:44.353419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.247 [2024-11-04 14:48:44.356214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.247 [2024-11-04 14:48:44.356304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.247 [2024-11-04 14:48:44.356311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.247 [2024-11-04 14:48:44.359120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.247 [2024-11-04 14:48:44.359145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.247 [2024-11-04 14:48:44.359151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.247 [2024-11-04 14:48:44.361946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.247 [2024-11-04 14:48:44.361971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.247 [2024-11-04 14:48:44.361977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.247 [2024-11-04 14:48:44.364728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.247 [2024-11-04 14:48:44.364751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.247 [2024-11-04 14:48:44.364756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.247 [2024-11-04 14:48:44.367525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.247 [2024-11-04 14:48:44.367624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.247 [2024-11-04 14:48:44.367632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.247 [2024-11-04 14:48:44.370428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.247 [2024-11-04 14:48:44.370454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.247 [2024-11-04 14:48:44.370459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.247 [2024-11-04 14:48:44.373297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.247 [2024-11-04 14:48:44.373323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.247 [2024-11-04 14:48:44.373329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.247 [2024-11-04 14:48:44.376127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.247 [2024-11-04 14:48:44.376153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.247 [2024-11-04 14:48:44.376159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.247 [2024-11-04 14:48:44.378941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.247 [2024-11-04 14:48:44.379033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.247 [2024-11-04 14:48:44.379041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.247 [2024-11-04 14:48:44.381863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.247 [2024-11-04 14:48:44.381890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.247 [2024-11-04 14:48:44.381895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.507 [2024-11-04 14:48:44.384734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.507 [2024-11-04 14:48:44.384758] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-11-04 14:48:44.384764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.507 [2024-11-04 14:48:44.387580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.507 [2024-11-04 14:48:44.387618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-11-04 14:48:44.387624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.507 [2024-11-04 14:48:44.390411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.507 [2024-11-04 14:48:44.390498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-11-04 14:48:44.390506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.507 [2024-11-04 14:48:44.393309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.507 [2024-11-04 14:48:44.393334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-11-04 14:48:44.393340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.507 [2024-11-04 14:48:44.396194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.507 [2024-11-04 14:48:44.396220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-11-04 14:48:44.396226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.507 [2024-11-04 14:48:44.399036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.507 [2024-11-04 14:48:44.399061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-11-04 14:48:44.399067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.507 [2024-11-04 14:48:44.401818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.507 [2024-11-04 14:48:44.401907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-11-04 14:48:44.401915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.507 [2024-11-04 14:48:44.404671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 
00:22:35.507 [2024-11-04 14:48:44.404694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-11-04 14:48:44.404700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.507 [2024-11-04 14:48:44.407511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.507 [2024-11-04 14:48:44.407536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-11-04 14:48:44.407542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.507 [2024-11-04 14:48:44.410366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.507 [2024-11-04 14:48:44.410391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-11-04 14:48:44.410398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.507 [2024-11-04 14:48:44.413198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.507 [2024-11-04 14:48:44.413286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-11-04 14:48:44.413294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.507 [2024-11-04 14:48:44.416084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.507 [2024-11-04 14:48:44.416105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-11-04 14:48:44.416111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.507 [2024-11-04 14:48:44.418916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.507 [2024-11-04 14:48:44.418941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-11-04 14:48:44.418946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.507 [2024-11-04 14:48:44.421747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.507 [2024-11-04 14:48:44.421769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-11-04 14:48:44.421774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.507 [2024-11-04 14:48:44.424542] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.507 [2024-11-04 14:48:44.424644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-11-04 14:48:44.424652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.507 [2024-11-04 14:48:44.427415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.507 [2024-11-04 14:48:44.427442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-11-04 14:48:44.427447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.507 [2024-11-04 14:48:44.430241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.507 [2024-11-04 14:48:44.430266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-11-04 14:48:44.430272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.507 [2024-11-04 14:48:44.433073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.507 [2024-11-04 14:48:44.433098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-11-04 14:48:44.433104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.507 [2024-11-04 14:48:44.435911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.507 [2024-11-04 14:48:44.436000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-11-04 14:48:44.436007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.507 [2024-11-04 14:48:44.438815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.507 [2024-11-04 14:48:44.438840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-11-04 14:48:44.438845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.507 [2024-11-04 14:48:44.441657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.508 [2024-11-04 14:48:44.441680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.508 [2024-11-04 14:48:44.441686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:22:35.508 [2024-11-04 14:48:44.444479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.508 [2024-11-04 14:48:44.444505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.508 [2024-11-04 14:48:44.444511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.508 [2024-11-04 14:48:44.447316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.508 [2024-11-04 14:48:44.447406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.508 [2024-11-04 14:48:44.447413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.508 [2024-11-04 14:48:44.450244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.508 [2024-11-04 14:48:44.450270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.508 [2024-11-04 14:48:44.450275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.508 [2024-11-04 14:48:44.453044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.508 [2024-11-04 14:48:44.453069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.508 [2024-11-04 14:48:44.453075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.508 [2024-11-04 14:48:44.455862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.508 [2024-11-04 14:48:44.455887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.508 [2024-11-04 14:48:44.455893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.508 [2024-11-04 14:48:44.458687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.508 [2024-11-04 14:48:44.458709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.508 [2024-11-04 14:48:44.458714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.508 [2024-11-04 14:48:44.461485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.508 [2024-11-04 14:48:44.461510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.508 [2024-11-04 14:48:44.461515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.508 [2024-11-04 14:48:44.464306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.508 [2024-11-04 14:48:44.464331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.508 [2024-11-04 14:48:44.464337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.508 [2024-11-04 14:48:44.467159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.508 [2024-11-04 14:48:44.467184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.508 [2024-11-04 14:48:44.467190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.508 [2024-11-04 14:48:44.469969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.508 [2024-11-04 14:48:44.470057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.508 [2024-11-04 14:48:44.470065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.508 [2024-11-04 14:48:44.472856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.508 [2024-11-04 14:48:44.472881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.508 [2024-11-04 14:48:44.472886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.508 [2024-11-04 14:48:44.475744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.508 [2024-11-04 14:48:44.475767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.508 [2024-11-04 14:48:44.475772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.508 [2024-11-04 14:48:44.478540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.508 [2024-11-04 14:48:44.478567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.508 [2024-11-04 14:48:44.478572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.508 [2024-11-04 14:48:44.481357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400) 00:22:35.508 [2024-11-04 14:48:44.481446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.508 [2024-11-04 14:48:44.481454] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:35.508 [2024-11-04 14:48:44.484245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400)
00:22:35.508 [2024-11-04 14:48:44.484271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:35.508 [2024-11-04 14:48:44.484277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplet repeats on tqpair 0x1184400 for qid:1 cid:15 (len:32, varying lba) roughly every 2.8 ms from 14:48:44.487 through 14:48:44.728 ...]
00:22:35.769 10834.50 IOPS, 1354.31 MiB/s [2024-11-04T14:48:44.909Z]
00:22:35.769 [2024-11-04 14:48:44.732222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1184400)
00:22:35.769 [2024-11-04 14:48:44.732247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:35.769 [2024-11-04 14:48:44.732252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:35.769
00:22:35.769                                                                                 Latency(us)
00:22:35.769 [2024-11-04T14:48:44.909Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:35.769 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:22:35.770 nvme0n1 : 2.00   10834.25    1354.28       0.00     0.00    1474.23    1317.02    6906.49
00:22:35.770 [2024-11-04T14:48:44.910Z] ===================================================================================================================
00:22:35.770 [2024-11-04T14:48:44.910Z] Total : 10834.25    1354.28       0.00     0.00    1474.23    1317.02    6906.49
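(A quick consistency check on the summary above, added here as a worked example rather than output from the run: with a 131072-byte I/O size, 10834.25 IOPS x 131072 B / 2^20 = 1354.28 MiB/s, matching the MiB/s column, and over the reported 2.003 s runtime that is roughly 10834.25 x 2.003 = 21,701 completed reads. The transient-transport-error counter read back just below via bdev_get_iostat comes to 700, which is the value the digest test asserts to be non-zero.)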
00:22:35.770 {
00:22:35.770   "results": [
00:22:35.770     {
00:22:35.770       "job": "nvme0n1",
00:22:35.770       "core_mask": "0x2",
00:22:35.770       "workload": "randread",
00:22:35.770       "status": "finished",
00:22:35.770       "queue_depth": 16,
00:22:35.770       "io_size": 131072,
00:22:35.770       "runtime": 2.003,
00:22:35.770       "iops": 10834.24862705941,
00:22:35.770       "mibps": 1354.2810783824264,
00:22:35.770       "io_failed": 0,
00:22:35.770       "io_timeout": 0,
00:22:35.770       "avg_latency_us": 1474.226643649885,
00:22:35.770       "min_latency_us": 1317.0215384615385,
00:22:35.770       "max_latency_us": 6906.486153846154
00:22:35.770     }
00:22:35.770   ],
00:22:35.770   "core_count": 1
00:22:35.770 }
00:22:35.770 14:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:22:35.770 14:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:22:35.770 | .driver_specific
00:22:35.770 | .nvme_error
00:22:35.770 | .status_code
00:22:35.770 | .command_transient_transport_error'
00:22:35.770 14:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:22:35.770 14:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:22:36.027 14:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 700 > 0 ))
00:22:36.027 14:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 78802
00:22:36.027 14:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 78802 ']'
00:22:36.027 14:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 78802
00:22:36.027 14:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:22:36.027 14:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:22:36.027 14:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78802
00:22:36.027 killing process with pid 78802
00:22:36.027 Received shutdown signal, test time was about 2.000000 seconds
00:22:36.027
00:22:36.027 Latency(us)
00:22:36.027 [2024-11-04T14:48:45.167Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:36.027 [2024-11-04T14:48:45.167Z] ===================================================================================================================
00:22:36.027 [2024-11-04T14:48:45.167Z] Total : 0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:22:36.027 14:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:22:36.027 14:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:22:36.027 14:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78802'
00:22:36.027 14:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 78802
00:22:36.027 14:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 78802
00:22:36.027 14:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
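(The get_transient_errcount / bperf_rpc / jq trace above reduces to one RPC call piped through one jq filter. A minimal standalone sketch, reconstructed from the trace rather than copied from host/digest.sh; the socket and repo paths are the ones used in this run:

  get_transient_errcount() {
      local bdev=$1
      # bdev_get_iostat reports per-bdev NVMe error counters here because bdevperf is
      # configured with bdev_nvme_set_options --nvme-error-stat, as traced below.
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
          jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
  }
  # The test then asserts that the injected digest errors actually surfaced, e.g.:
  (( $(get_transient_errcount nvme0n1) > 0 ))

In this run the extracted count was 700, so the assertion passed.)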
00:22:36.027 14:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:22:36.027 14:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:22:36.027 14:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:22:36.027 14:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:22:36.027 14:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=78849
00:22:36.027 14:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:22:36.027 14:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 78849 /var/tmp/bperf.sock
00:22:36.027 14:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 78849 ']'
00:22:36.027 14:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:22:36.027 14:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:22:36.027 14:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:22:36.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:22:36.027 14:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:22:36.027 14:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:22:36.027 [2024-11-04 14:48:45.111910] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization...
00:22:36.027 [2024-11-04 14:48:45.112070] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78849 ]
00:22:36.285 [2024-11-04 14:48:45.246522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:36.285 [2024-11-04 14:48:45.277709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:22:36.285 [2024-11-04 14:48:45.306404] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:22:36.849 14:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:22:36.849 14:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:22:36.849 14:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:37.106 14:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:37.106 14:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:22:37.106 14:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:37.106 14:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:22:37.106 14:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:37.106 14:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:37.106 14:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:37.363 nvme0n1
00:22:37.363 14:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:22:37.363 14:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:37.363 14:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:22:37.363 14:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:37.363 14:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:22:37.363 14:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:22:37.620 Running I/O for 2 seconds...
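(The xtrace above corresponds to the following RPC sequence against the freshly started bdevperf instance. This is a condensed sketch using the commands exactly as traced; the trailing comments are an interpretation added here, not output from the run:

  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count NVMe error statuses, keep retrying failed I/O
  $RPC accel_error_inject_error -o crc32c -t disable                   # clear any leftover crc32c error injection
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0   # attach with TCP data digest enabled
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256            # start corrupting crc32c results (-i 256 taken verbatim from the trace)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With data digest enabled and the software crc32c path corrupted, every affected write completes with a digest error on the target side, which the initiator surfaces as the transient transport errors logged below.)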
00:22:37.620 [2024-11-04 14:48:46.576759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166fef90
[2024-11-04 14:48:46.578759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-04 14:48:46.578925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same Data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplet repeats on tqpair 0x1af0750 roughly every 10-12 ms, with cid stepping through 3, 5, 7, ... 53 and pdu walking down from 0x2000166feb58 to 0x2000166f1868, through 14:48:46.894 ...]
00:22:37.879 [2024-11-04 14:48:46.905353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f0ff8
[2024-11-04 14:48:46.906896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-04 14:48:46.906916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:37.879 [2024-11-04 14:48:46.917521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f0788 00:22:37.879 [2024-11-04 14:48:46.919043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.879 [2024-11-04 14:48:46.919063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:37.879 [2024-11-04 14:48:46.929711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166eff18 00:22:37.879 [2024-11-04 14:48:46.931205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.879 [2024-11-04 14:48:46.931223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:37.879 [2024-11-04 14:48:46.942007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166ef6a8 00:22:37.879 [2024-11-04 14:48:46.943529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.879 [2024-11-04 14:48:46.943549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:37.879 [2024-11-04 14:48:46.954183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166eee38 00:22:37.879 [2024-11-04 14:48:46.955618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.879 [2024-11-04 14:48:46.955636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:37.879 [2024-11-04 14:48:46.966166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166ee5c8 00:22:37.879 [2024-11-04 14:48:46.967621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.879 [2024-11-04 14:48:46.967639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.879 [2024-11-04 14:48:46.978169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166edd58 00:22:37.879 [2024-11-04 14:48:46.979591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.879 [2024-11-04 14:48:46.979615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:37.879 [2024-11-04 14:48:46.990414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166ed4e8 00:22:37.879 [2024-11-04 14:48:46.991838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.879 [2024-11-04 14:48:46.991856] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:37.879 [2024-11-04 14:48:47.002602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166ecc78 00:22:37.879 [2024-11-04 14:48:47.004000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.879 [2024-11-04 14:48:47.004019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:37.879 [2024-11-04 14:48:47.014517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166ec408 00:22:37.879 [2024-11-04 14:48:47.015930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.879 [2024-11-04 14:48:47.015949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:38.137 [2024-11-04 14:48:47.026855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166ebb98 00:22:38.137 [2024-11-04 14:48:47.028226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.137 [2024-11-04 14:48:47.028245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:38.137 [2024-11-04 14:48:47.038939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166eb328 00:22:38.137 [2024-11-04 14:48:47.040256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.137 [2024-11-04 14:48:47.040275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:38.137 [2024-11-04 14:48:47.050790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166eaab8 00:22:38.137 [2024-11-04 14:48:47.052094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.137 [2024-11-04 14:48:47.052112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:38.137 [2024-11-04 14:48:47.062804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166ea248 00:22:38.137 [2024-11-04 14:48:47.064123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.137 [2024-11-04 14:48:47.064143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:38.137 [2024-11-04 14:48:47.074973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e99d8 00:22:38.137 [2024-11-04 14:48:47.076279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.137 [2024-11-04 
14:48:47.076298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:38.137 [2024-11-04 14:48:47.087158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e9168 00:22:38.137 [2024-11-04 14:48:47.088498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.137 [2024-11-04 14:48:47.088517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:38.137 [2024-11-04 14:48:47.099395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e88f8 00:22:38.137 [2024-11-04 14:48:47.100693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.137 [2024-11-04 14:48:47.100711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:38.137 [2024-11-04 14:48:47.111618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e8088 00:22:38.137 [2024-11-04 14:48:47.112881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.137 [2024-11-04 14:48:47.112899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:38.137 [2024-11-04 14:48:47.123803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e7818 00:22:38.137 [2024-11-04 14:48:47.125050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.137 [2024-11-04 14:48:47.125068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:38.137 [2024-11-04 14:48:47.135983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e6fa8 00:22:38.137 [2024-11-04 14:48:47.137217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.137 [2024-11-04 14:48:47.137235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:38.137 [2024-11-04 14:48:47.148270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e6738 00:22:38.137 [2024-11-04 14:48:47.149531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.137 [2024-11-04 14:48:47.149549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:38.137 [2024-11-04 14:48:47.160489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e5ec8 00:22:38.137 [2024-11-04 14:48:47.161711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:38.137 [2024-11-04 14:48:47.161729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.137 [2024-11-04 14:48:47.172729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e5658 00:22:38.137 [2024-11-04 14:48:47.173929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.137 [2024-11-04 14:48:47.173947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:38.137 [2024-11-04 14:48:47.184898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e4de8 00:22:38.137 [2024-11-04 14:48:47.186081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.137 [2024-11-04 14:48:47.186098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:38.137 [2024-11-04 14:48:47.197075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e4578 00:22:38.137 [2024-11-04 14:48:47.198260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.137 [2024-11-04 14:48:47.198278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:38.137 [2024-11-04 14:48:47.209383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e3d08 00:22:38.137 [2024-11-04 14:48:47.210536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.137 [2024-11-04 14:48:47.210555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:38.137 [2024-11-04 14:48:47.221595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e3498 00:22:38.137 [2024-11-04 14:48:47.222732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.137 [2024-11-04 14:48:47.222750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:38.137 [2024-11-04 14:48:47.233786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e2c28 00:22:38.137 [2024-11-04 14:48:47.234901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.137 [2024-11-04 14:48:47.234919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:38.138 [2024-11-04 14:48:47.246069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e23b8 00:22:38.138 [2024-11-04 14:48:47.247169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21869 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:38.138 [2024-11-04 14:48:47.247187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:38.138 [2024-11-04 14:48:47.258243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e1b48 00:22:38.138 [2024-11-04 14:48:47.259327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.138 [2024-11-04 14:48:47.259344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:38.138 [2024-11-04 14:48:47.270414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e12d8 00:22:38.138 [2024-11-04 14:48:47.271508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.138 [2024-11-04 14:48:47.271527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:38.395 [2024-11-04 14:48:47.282859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e0a68 00:22:38.395 [2024-11-04 14:48:47.283918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.395 [2024-11-04 14:48:47.283936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:38.395 [2024-11-04 14:48:47.295055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e01f8 00:22:38.395 [2024-11-04 14:48:47.296094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.395 [2024-11-04 14:48:47.296112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:38.395 [2024-11-04 14:48:47.307244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166df988 00:22:38.395 [2024-11-04 14:48:47.308275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.395 [2024-11-04 14:48:47.308293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:38.395 [2024-11-04 14:48:47.319413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166df118 00:22:38.395 [2024-11-04 14:48:47.320420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.395 [2024-11-04 14:48:47.320439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:38.395 [2024-11-04 14:48:47.331615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166de8a8 00:22:38.395 [2024-11-04 14:48:47.332612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:125 nsid:1 lba:8801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.395 [2024-11-04 14:48:47.332631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:38.395 [2024-11-04 14:48:47.343778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166de038 00:22:38.395 [2024-11-04 14:48:47.344758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.395 [2024-11-04 14:48:47.344776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:38.395 [2024-11-04 14:48:47.361117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166de038 00:22:38.395 [2024-11-04 14:48:47.363062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.395 [2024-11-04 14:48:47.363080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.395 [2024-11-04 14:48:47.373284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166de8a8 00:22:38.395 [2024-11-04 14:48:47.375210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.395 [2024-11-04 14:48:47.375228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:38.395 [2024-11-04 14:48:47.385430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166df118 00:22:38.395 [2024-11-04 14:48:47.387339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.395 [2024-11-04 14:48:47.387357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:38.395 [2024-11-04 14:48:47.397556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166df988 00:22:38.395 [2024-11-04 14:48:47.399405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.395 [2024-11-04 14:48:47.399423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:38.395 [2024-11-04 14:48:47.409778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e01f8 00:22:38.395 [2024-11-04 14:48:47.411653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.395 [2024-11-04 14:48:47.411680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:38.395 [2024-11-04 14:48:47.421814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e0a68 00:22:38.395 [2024-11-04 14:48:47.423651] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.395 [2024-11-04 14:48:47.423670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:38.395 [2024-11-04 14:48:47.433935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e12d8 00:22:38.395 [2024-11-04 14:48:47.435771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.395 [2024-11-04 14:48:47.435790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:38.395 [2024-11-04 14:48:47.446122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e1b48 00:22:38.395 [2024-11-04 14:48:47.447921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.395 [2024-11-04 14:48:47.447941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:38.395 [2024-11-04 14:48:47.458089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e23b8 00:22:38.395 [2024-11-04 14:48:47.459844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.395 [2024-11-04 14:48:47.459862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:38.395 [2024-11-04 14:48:47.470158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e2c28 00:22:38.395 [2024-11-04 14:48:47.471948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.395 [2024-11-04 14:48:47.471967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:38.395 [2024-11-04 14:48:47.482348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e3498 00:22:38.395 [2024-11-04 14:48:47.484127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.395 [2024-11-04 14:48:47.484145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:38.395 [2024-11-04 14:48:47.494525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e3d08 00:22:38.395 [2024-11-04 14:48:47.496286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.395 [2024-11-04 14:48:47.496305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:38.395 [2024-11-04 14:48:47.506780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e4578 00:22:38.395 [2024-11-04 
14:48:47.508579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.395 [2024-11-04 14:48:47.508598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:38.395 [2024-11-04 14:48:47.519039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e4de8 00:22:38.395 [2024-11-04 14:48:47.520770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.395 [2024-11-04 14:48:47.520789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:38.395 [2024-11-04 14:48:47.531297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e5658 00:22:38.395 [2024-11-04 14:48:47.533077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.395 [2024-11-04 14:48:47.533095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:38.654 [2024-11-04 14:48:47.543617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e5ec8 00:22:38.654 [2024-11-04 14:48:47.545313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.654 [2024-11-04 14:48:47.545332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.654 [2024-11-04 14:48:47.556077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e6738 00:22:38.654 [2024-11-04 14:48:47.558518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.654 [2024-11-04 14:48:47.558540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:38.654 20622.00 IOPS, 80.55 MiB/s [2024-11-04T14:48:47.794Z] [2024-11-04 14:48:47.569232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e6fa8 00:22:38.654 [2024-11-04 14:48:47.570919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.654 [2024-11-04 14:48:47.570938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:38.654 [2024-11-04 14:48:47.581420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e7818 00:22:38.654 [2024-11-04 14:48:47.583086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.654 [2024-11-04 14:48:47.583105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:38.654 [2024-11-04 14:48:47.593599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1af0750) with pdu=0x2000166e8088 00:22:38.654 [2024-11-04 14:48:47.595245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.654 [2024-11-04 14:48:47.595263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:38.654 [2024-11-04 14:48:47.605476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e88f8 00:22:38.654 [2024-11-04 14:48:47.607067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.654 [2024-11-04 14:48:47.607085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:38.654 [2024-11-04 14:48:47.617613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e9168 00:22:38.654 [2024-11-04 14:48:47.619242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.654 [2024-11-04 14:48:47.619261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:38.654 [2024-11-04 14:48:47.629749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166e99d8 00:22:38.654 [2024-11-04 14:48:47.631294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.654 [2024-11-04 14:48:47.631313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:38.654 [2024-11-04 14:48:47.641580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166ea248 00:22:38.654 [2024-11-04 14:48:47.643117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.654 [2024-11-04 14:48:47.643136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:38.654 [2024-11-04 14:48:47.653378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166eaab8 00:22:38.654 [2024-11-04 14:48:47.654908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.654 [2024-11-04 14:48:47.654925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:38.654 [2024-11-04 14:48:47.665188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166eb328 00:22:38.654 [2024-11-04 14:48:47.666703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.654 [2024-11-04 14:48:47.666722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:38.654 [2024-11-04 14:48:47.677005] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166ebb98 00:22:38.654 [2024-11-04 14:48:47.678529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.654 [2024-11-04 14:48:47.678548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:38.654 [2024-11-04 14:48:47.689135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166ec408 00:22:38.654 [2024-11-04 14:48:47.690630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.654 [2024-11-04 14:48:47.690648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:38.654 [2024-11-04 14:48:47.701277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166ecc78 00:22:38.654 [2024-11-04 14:48:47.702780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.654 [2024-11-04 14:48:47.702799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:38.654 [2024-11-04 14:48:47.713524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166ed4e8 00:22:38.654 [2024-11-04 14:48:47.715058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.654 [2024-11-04 14:48:47.715076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:38.654 [2024-11-04 14:48:47.725749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166edd58 00:22:38.654 [2024-11-04 14:48:47.727219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.654 [2024-11-04 14:48:47.727238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:38.654 [2024-11-04 14:48:47.737909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166ee5c8 00:22:38.654 [2024-11-04 14:48:47.739363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.654 [2024-11-04 14:48:47.739382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:38.654 [2024-11-04 14:48:47.750067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166eee38 00:22:38.654 [2024-11-04 14:48:47.751505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.654 [2024-11-04 14:48:47.751523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:38.654 [2024-11-04 14:48:47.762284] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166ef6a8 00:22:38.654 [2024-11-04 14:48:47.763754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.654 [2024-11-04 14:48:47.763773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:38.654 [2024-11-04 14:48:47.774537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166eff18 00:22:38.654 [2024-11-04 14:48:47.775953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.654 [2024-11-04 14:48:47.775971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:38.654 [2024-11-04 14:48:47.786740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f0788 00:22:38.655 [2024-11-04 14:48:47.788143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.655 [2024-11-04 14:48:47.788161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:38.913 [2024-11-04 14:48:47.798853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f0ff8 00:22:38.913 [2024-11-04 14:48:47.800223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.913 [2024-11-04 14:48:47.800241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:38.913 [2024-11-04 14:48:47.811003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f1868 00:22:38.913 [2024-11-04 14:48:47.812346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.913 [2024-11-04 14:48:47.812363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:38.913 [2024-11-04 14:48:47.823167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f20d8 00:22:38.913 [2024-11-04 14:48:47.824511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.913 [2024-11-04 14:48:47.824529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:38.913 [2024-11-04 14:48:47.835347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f2948 00:22:38.913 [2024-11-04 14:48:47.836681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.913 [2024-11-04 14:48:47.836699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:38.913 [2024-11-04 
14:48:47.847484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f31b8 00:22:38.913 [2024-11-04 14:48:47.848790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.913 [2024-11-04 14:48:47.848809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:38.913 [2024-11-04 14:48:47.859585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f3a28 00:22:38.913 [2024-11-04 14:48:47.860856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.913 [2024-11-04 14:48:47.860874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:38.913 [2024-11-04 14:48:47.871383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f4298 00:22:38.913 [2024-11-04 14:48:47.872635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.913 [2024-11-04 14:48:47.872652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:38.913 [2024-11-04 14:48:47.883317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f4b08 00:22:38.913 [2024-11-04 14:48:47.884589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.913 [2024-11-04 14:48:47.884614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:38.913 [2024-11-04 14:48:47.895463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f5378 00:22:38.913 [2024-11-04 14:48:47.896720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.913 [2024-11-04 14:48:47.896738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:38.913 [2024-11-04 14:48:47.907593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f5be8 00:22:38.913 [2024-11-04 14:48:47.908869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.913 [2024-11-04 14:48:47.908887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:38.913 [2024-11-04 14:48:47.919940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f6458 00:22:38.913 [2024-11-04 14:48:47.921170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.913 [2024-11-04 14:48:47.921188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 
00:22:38.913 [2024-11-04 14:48:47.932202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f6cc8 00:22:38.913 [2024-11-04 14:48:47.933426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.913 [2024-11-04 14:48:47.933445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.913 [2024-11-04 14:48:47.944388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f7538 00:22:38.913 [2024-11-04 14:48:47.945593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.913 [2024-11-04 14:48:47.945621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:38.913 [2024-11-04 14:48:47.956532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f7da8 00:22:38.913 [2024-11-04 14:48:47.957708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.913 [2024-11-04 14:48:47.957726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:38.913 [2024-11-04 14:48:47.968747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f8618 00:22:38.913 [2024-11-04 14:48:47.969928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.913 [2024-11-04 14:48:47.969947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:38.914 [2024-11-04 14:48:47.980943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f8e88 00:22:38.914 [2024-11-04 14:48:47.982105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.914 [2024-11-04 14:48:47.982123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:38.914 [2024-11-04 14:48:47.993072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f96f8 00:22:38.914 [2024-11-04 14:48:47.994213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.914 [2024-11-04 14:48:47.994231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:38.914 [2024-11-04 14:48:48.005254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f9f68 00:22:38.914 [2024-11-04 14:48:48.006383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.914 [2024-11-04 14:48:48.006402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 
sqhd:0016 p:0 m:0 dnr:0 00:22:38.914 [2024-11-04 14:48:48.017398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166fa7d8 00:22:38.914 [2024-11-04 14:48:48.018488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.914 [2024-11-04 14:48:48.018506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:38.914 [2024-11-04 14:48:48.029655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166fb048 00:22:38.914 [2024-11-04 14:48:48.030750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.914 [2024-11-04 14:48:48.030769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:38.914 [2024-11-04 14:48:48.041817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166fb8b8 00:22:38.914 [2024-11-04 14:48:48.042913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.914 [2024-11-04 14:48:48.042932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:39.171 [2024-11-04 14:48:48.054185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166fc128 00:22:39.171 [2024-11-04 14:48:48.055245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.171 [2024-11-04 14:48:48.055264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:39.171 [2024-11-04 14:48:48.066372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166fc998 00:22:39.171 [2024-11-04 14:48:48.067420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.171 [2024-11-04 14:48:48.067439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:39.171 [2024-11-04 14:48:48.078662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166fd208 00:22:39.171 [2024-11-04 14:48:48.079698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.171 [2024-11-04 14:48:48.079716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:39.171 [2024-11-04 14:48:48.091212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166fda78 00:22:39.171 [2024-11-04 14:48:48.092230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.171 [2024-11-04 14:48:48.092248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:39.171 [2024-11-04 14:48:48.103401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166fe2e8 00:22:39.171 [2024-11-04 14:48:48.104404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.171 [2024-11-04 14:48:48.104422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:39.171 [2024-11-04 14:48:48.115550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166feb58 00:22:39.171 [2024-11-04 14:48:48.116526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.171 [2024-11-04 14:48:48.116544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:39.171 [2024-11-04 14:48:48.132790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166fef90 00:22:39.171 [2024-11-04 14:48:48.134732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.171 [2024-11-04 14:48:48.134751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.171 [2024-11-04 14:48:48.144947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166feb58 00:22:39.171 [2024-11-04 14:48:48.146877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.172 [2024-11-04 14:48:48.146896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:39.172 [2024-11-04 14:48:48.157103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166fe2e8 00:22:39.172 [2024-11-04 14:48:48.158973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.172 [2024-11-04 14:48:48.158991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:39.172 [2024-11-04 14:48:48.169056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166fda78 00:22:39.172 [2024-11-04 14:48:48.170941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.172 [2024-11-04 14:48:48.170959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:39.172 [2024-11-04 14:48:48.181040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166fd208 00:22:39.172 [2024-11-04 14:48:48.182870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.172 [2024-11-04 14:48:48.182887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:39.172 [2024-11-04 14:48:48.192876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166fc998 00:22:39.172 [2024-11-04 14:48:48.194678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.172 [2024-11-04 14:48:48.194696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:39.172 [2024-11-04 14:48:48.204840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166fc128 00:22:39.172 [2024-11-04 14:48:48.206642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.172 [2024-11-04 14:48:48.206660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:39.172 [2024-11-04 14:48:48.216652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166fb8b8 00:22:39.172 [2024-11-04 14:48:48.218427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.172 [2024-11-04 14:48:48.218443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:39.172 [2024-11-04 14:48:48.228635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166fb048 00:22:39.172 [2024-11-04 14:48:48.230450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.172 [2024-11-04 14:48:48.230469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:39.172 [2024-11-04 14:48:48.240847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166fa7d8 00:22:39.172 [2024-11-04 14:48:48.242679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.172 [2024-11-04 14:48:48.242697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:39.172 [2024-11-04 14:48:48.253151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f9f68 00:22:39.172 [2024-11-04 14:48:48.254943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.172 [2024-11-04 14:48:48.254960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:39.172 [2024-11-04 14:48:48.265342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f96f8 00:22:39.172 [2024-11-04 14:48:48.267119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.172 [2024-11-04 14:48:48.267137] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:39.172 [2024-11-04 14:48:48.277582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f8e88 00:22:39.172 [2024-11-04 14:48:48.279381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.172 [2024-11-04 14:48:48.279399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:39.172 [2024-11-04 14:48:48.289815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f8618 00:22:39.172 [2024-11-04 14:48:48.291549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.172 [2024-11-04 14:48:48.291567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:39.172 [2024-11-04 14:48:48.301978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f7da8 00:22:39.172 [2024-11-04 14:48:48.303732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.172 [2024-11-04 14:48:48.303750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:39.429 [2024-11-04 14:48:48.314267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f7538 00:22:39.429 [2024-11-04 14:48:48.315986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.429 [2024-11-04 14:48:48.316004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:39.429 [2024-11-04 14:48:48.326513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f6cc8 00:22:39.429 [2024-11-04 14:48:48.328249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.429 [2024-11-04 14:48:48.328267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.429 [2024-11-04 14:48:48.338758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f6458 00:22:39.429 [2024-11-04 14:48:48.340431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.429 [2024-11-04 14:48:48.340449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:39.429 [2024-11-04 14:48:48.350936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f5be8 00:22:39.429 [2024-11-04 14:48:48.352597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.429 [2024-11-04 14:48:48.352621] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:39.429 [2024-11-04 14:48:48.363187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f5378 00:22:39.429 [2024-11-04 14:48:48.364869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.429 [2024-11-04 14:48:48.364887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:39.429 [2024-11-04 14:48:48.375399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f4b08 00:22:39.429 [2024-11-04 14:48:48.376996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.429 [2024-11-04 14:48:48.377014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:39.429 [2024-11-04 14:48:48.387363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f4298 00:22:39.430 [2024-11-04 14:48:48.388959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.430 [2024-11-04 14:48:48.388978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:39.430 [2024-11-04 14:48:48.399327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f3a28 00:22:39.430 [2024-11-04 14:48:48.400891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.430 [2024-11-04 14:48:48.400910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:39.430 [2024-11-04 14:48:48.411195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f31b8 00:22:39.430 [2024-11-04 14:48:48.412741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.430 [2024-11-04 14:48:48.412760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:39.430 [2024-11-04 14:48:48.423015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f2948 00:22:39.430 [2024-11-04 14:48:48.424541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.430 [2024-11-04 14:48:48.424560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:39.430 [2024-11-04 14:48:48.434847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f20d8 00:22:39.430 [2024-11-04 14:48:48.436359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.430 
[2024-11-04 14:48:48.436378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:39.430 [2024-11-04 14:48:48.446683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f1868 00:22:39.430 [2024-11-04 14:48:48.448175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.430 [2024-11-04 14:48:48.448194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:39.430 [2024-11-04 14:48:48.458506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f0ff8 00:22:39.430 [2024-11-04 14:48:48.459987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.430 [2024-11-04 14:48:48.460005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:39.430 [2024-11-04 14:48:48.470320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166f0788 00:22:39.430 [2024-11-04 14:48:48.471791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.430 [2024-11-04 14:48:48.471810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:39.430 [2024-11-04 14:48:48.482213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166eff18 00:22:39.430 [2024-11-04 14:48:48.483700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.430 [2024-11-04 14:48:48.483719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:39.430 [2024-11-04 14:48:48.494090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166ef6a8 00:22:39.430 [2024-11-04 14:48:48.495528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.430 [2024-11-04 14:48:48.495546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:39.430 [2024-11-04 14:48:48.505914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166eee38 00:22:39.430 [2024-11-04 14:48:48.507333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.430 [2024-11-04 14:48:48.507351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:39.430 [2024-11-04 14:48:48.517732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166ee5c8 00:22:39.430 [2024-11-04 14:48:48.519141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23915 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:39.430 [2024-11-04 14:48:48.519159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.430 [2024-11-04 14:48:48.529567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166edd58 00:22:39.430 [2024-11-04 14:48:48.531047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.430 [2024-11-04 14:48:48.531066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:39.430 [2024-11-04 14:48:48.541891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166ed4e8 00:22:39.430 [2024-11-04 14:48:48.543311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.430 [2024-11-04 14:48:48.543329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:39.430 [2024-11-04 14:48:48.554058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0750) with pdu=0x2000166ecc78 00:22:39.430 [2024-11-04 14:48:48.555459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.430 [2024-11-04 14:48:48.555478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:39.430 20810.50 IOPS, 81.29 MiB/s 00:22:39.430 Latency(us) 00:22:39.430 [2024-11-04T14:48:48.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.430 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:39.430 nvme0n1 : 2.01 20807.43 81.28 0.00 0.00 6147.33 5116.85 23492.14 00:22:39.430 [2024-11-04T14:48:48.570Z] =================================================================================================================== 00:22:39.430 [2024-11-04T14:48:48.570Z] Total : 20807.43 81.28 0.00 0.00 6147.33 5116.85 23492.14 00:22:39.430 { 00:22:39.430 "results": [ 00:22:39.430 { 00:22:39.430 "job": "nvme0n1", 00:22:39.430 "core_mask": "0x2", 00:22:39.430 "workload": "randwrite", 00:22:39.430 "status": "finished", 00:22:39.430 "queue_depth": 128, 00:22:39.430 "io_size": 4096, 00:22:39.430 "runtime": 2.006447, 00:22:39.430 "iops": 20807.427258233085, 00:22:39.430 "mibps": 81.27901272747299, 00:22:39.430 "io_failed": 0, 00:22:39.430 "io_timeout": 0, 00:22:39.430 "avg_latency_us": 6147.331971102025, 00:22:39.430 "min_latency_us": 5116.84923076923, 00:22:39.430 "max_latency_us": 23492.135384615383 00:22:39.430 } 00:22:39.430 ], 00:22:39.430 "core_count": 1 00:22:39.430 } 00:22:39.687 14:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:39.687 14:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:39.688 14:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:39.688 14:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r 
'.bdevs[0] 00:22:39.688 | .driver_specific 00:22:39.688 | .nvme_error 00:22:39.688 | .status_code 00:22:39.688 | .command_transient_transport_error' 00:22:39.688 14:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 163 > 0 )) 00:22:39.688 14:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 78849 00:22:39.688 14:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 78849 ']' 00:22:39.688 14:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 78849 00:22:39.688 14:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:22:39.688 14:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:39.688 14:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78849 00:22:39.688 14:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:39.688 killing process with pid 78849 00:22:39.688 14:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:39.688 14:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78849' 00:22:39.688 Received shutdown signal, test time was about 2.000000 seconds 00:22:39.688 00:22:39.688 Latency(us) 00:22:39.688 [2024-11-04T14:48:48.828Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.688 [2024-11-04T14:48:48.828Z] =================================================================================================================== 00:22:39.688 [2024-11-04T14:48:48.828Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:39.688 14:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 78849 00:22:39.688 14:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 78849 00:22:39.945 14:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:22:39.945 14:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:22:39.945 14:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:22:39.945 14:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:22:39.945 14:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:22:39.945 14:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=78904 00:22:39.945 14:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 78904 /var/tmp/bperf.sock 00:22:39.945 14:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 78904 ']' 00:22:39.945 14:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:22:39.945 14:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:39.945 14:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:22:39.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:39.945 14:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:39.945 14:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:39.945 14:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:39.945 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:39.945 Zero copy mechanism will not be used. 00:22:39.945 [2024-11-04 14:48:48.953128] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:22:39.945 [2024-11-04 14:48:48.953186] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78904 ] 00:22:39.945 [2024-11-04 14:48:49.079260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.202 [2024-11-04 14:48:49.110549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.202 [2024-11-04 14:48:49.138887] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:40.765 14:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:40.765 14:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:22:40.765 14:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:40.765 14:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:41.022 14:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:41.022 14:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.022 14:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:41.022 14:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.022 14:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:41.022 14:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:41.280 nvme0n1 00:22:41.280 14:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:41.280 14:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.280 14:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
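For reference, the digest-error setup traced above can be repeated by hand against a running bdevperf instance. The following is a minimal sketch assembled only from commands already visible in this trace; the socket path /var/tmp/bperf.sock, address 10.0.0.3:4420 and NQN nqn.2016-06.io.spdk:cnode1 are simply the values this particular run used, and the accel_error_inject_error call mirrors the rpc_cmd step above (which talks to that script's default RPC socket, not the bperf socket):

  # enable per-NVMe error statistics and let the bdev layer retry failed I/O indefinitely
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # attach the TCP controller with data digest (--ddgst) enabled
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # arm crc32c error injection exactly as the trace does (-o/-t/-i arguments copied verbatim; sent to the default RPC socket, as rpc_cmd does)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  # after perform_tests completes, read back the transient transport error counter the test asserts on
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'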
00:22:41.280 14:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.280 14:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:41.280 14:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:41.280 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:41.280 Zero copy mechanism will not be used. 00:22:41.280 Running I/O for 2 seconds... 00:22:41.280 [2024-11-04 14:48:50.349826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.280 [2024-11-04 14:48:50.350036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.280 [2024-11-04 14:48:50.350059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.280 [2024-11-04 14:48:50.352841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.280 [2024-11-04 14:48:50.353040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.280 [2024-11-04 14:48:50.353061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.280 [2024-11-04 14:48:50.355835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.280 [2024-11-04 14:48:50.356030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.280 [2024-11-04 14:48:50.356055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.280 [2024-11-04 14:48:50.358850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.280 [2024-11-04 14:48:50.359046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.280 [2024-11-04 14:48:50.359065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.280 [2024-11-04 14:48:50.361832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.280 [2024-11-04 14:48:50.362028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.280 [2024-11-04 14:48:50.362055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.280 [2024-11-04 14:48:50.364820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.280 [2024-11-04 14:48:50.365017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.280 [2024-11-04 14:48:50.365039] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.280 [2024-11-04 14:48:50.367828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.280 [2024-11-04 14:48:50.368024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.280 [2024-11-04 14:48:50.368047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.280 [2024-11-04 14:48:50.370832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.280 [2024-11-04 14:48:50.371029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.280 [2024-11-04 14:48:50.371051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.280 [2024-11-04 14:48:50.373827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.280 [2024-11-04 14:48:50.374021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.280 [2024-11-04 14:48:50.374042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.280 [2024-11-04 14:48:50.376818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.280 [2024-11-04 14:48:50.377013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.280 [2024-11-04 14:48:50.377034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.280 [2024-11-04 14:48:50.379784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.280 [2024-11-04 14:48:50.379982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.280 [2024-11-04 14:48:50.380003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.280 [2024-11-04 14:48:50.382758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.280 [2024-11-04 14:48:50.382953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.280 [2024-11-04 14:48:50.382974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.280 [2024-11-04 14:48:50.385725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.280 [2024-11-04 14:48:50.385922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:41.280 [2024-11-04 14:48:50.385941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.280 [2024-11-04 14:48:50.388724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.280 [2024-11-04 14:48:50.388919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.280 [2024-11-04 14:48:50.388937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.280 [2024-11-04 14:48:50.391689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.280 [2024-11-04 14:48:50.391884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.280 [2024-11-04 14:48:50.391903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.280 [2024-11-04 14:48:50.394655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.280 [2024-11-04 14:48:50.394850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.280 [2024-11-04 14:48:50.394865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.280 [2024-11-04 14:48:50.397647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.280 [2024-11-04 14:48:50.397842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.280 [2024-11-04 14:48:50.397861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.280 [2024-11-04 14:48:50.400644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.280 [2024-11-04 14:48:50.400840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.280 [2024-11-04 14:48:50.400861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.280 [2024-11-04 14:48:50.403630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.280 [2024-11-04 14:48:50.403827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.280 [2024-11-04 14:48:50.403850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.280 [2024-11-04 14:48:50.406602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.280 [2024-11-04 14:48:50.406809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.280 [2024-11-04 14:48:50.406830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.280 [2024-11-04 14:48:50.409552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.280 [2024-11-04 14:48:50.409773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.280 [2024-11-04 14:48:50.409788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.281 [2024-11-04 14:48:50.412504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.281 [2024-11-04 14:48:50.412709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.281 [2024-11-04 14:48:50.412724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.281 [2024-11-04 14:48:50.415472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.281 [2024-11-04 14:48:50.415679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.281 [2024-11-04 14:48:50.415700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.281 [2024-11-04 14:48:50.418483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.281 [2024-11-04 14:48:50.418691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.281 [2024-11-04 14:48:50.418712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.539 [2024-11-04 14:48:50.421454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.539 [2024-11-04 14:48:50.421673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.539 [2024-11-04 14:48:50.421693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.539 [2024-11-04 14:48:50.424429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.540 [2024-11-04 14:48:50.424635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.540 [2024-11-04 14:48:50.424654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.540 [2024-11-04 14:48:50.427399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.540 [2024-11-04 14:48:50.427595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.540 [2024-11-04 14:48:50.427628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.540 [2024-11-04 14:48:50.430364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.540 [2024-11-04 14:48:50.430561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.540 [2024-11-04 14:48:50.430583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.540 [2024-11-04 14:48:50.433367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.540 [2024-11-04 14:48:50.433563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.540 [2024-11-04 14:48:50.433592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.540 [2024-11-04 14:48:50.436378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.540 [2024-11-04 14:48:50.436575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.540 [2024-11-04 14:48:50.436597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.540 [2024-11-04 14:48:50.439396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.540 [2024-11-04 14:48:50.439592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.540 [2024-11-04 14:48:50.439619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.540 [2024-11-04 14:48:50.442385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.540 [2024-11-04 14:48:50.442580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.540 [2024-11-04 14:48:50.442602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.540 [2024-11-04 14:48:50.445356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.540 [2024-11-04 14:48:50.445555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.540 [2024-11-04 14:48:50.445582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.540 [2024-11-04 14:48:50.448354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.540 [2024-11-04 14:48:50.448552] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.540 [2024-11-04 14:48:50.448574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.540 [2024-11-04 14:48:50.451339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.540 [2024-11-04 14:48:50.451535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.540 [2024-11-04 14:48:50.451569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.540 [2024-11-04 14:48:50.454338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.540 [2024-11-04 14:48:50.454533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.540 [2024-11-04 14:48:50.454563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.540 [2024-11-04 14:48:50.457338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.540 [2024-11-04 14:48:50.457537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.540 [2024-11-04 14:48:50.457556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.540 [2024-11-04 14:48:50.460319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.540 [2024-11-04 14:48:50.460517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.540 [2024-11-04 14:48:50.460539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.540 [2024-11-04 14:48:50.463318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.540 [2024-11-04 14:48:50.463519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.540 [2024-11-04 14:48:50.463540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.540 [2024-11-04 14:48:50.466296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.540 [2024-11-04 14:48:50.466491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.540 [2024-11-04 14:48:50.466513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.540 [2024-11-04 14:48:50.469311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.540 
[2024-11-04 14:48:50.469507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.540 [2024-11-04 14:48:50.469525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.540 [2024-11-04 14:48:50.472292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.540 [2024-11-04 14:48:50.472489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.540 [2024-11-04 14:48:50.472510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.540 [2024-11-04 14:48:50.475280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.540 [2024-11-04 14:48:50.475477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.540 [2024-11-04 14:48:50.475498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.540 [2024-11-04 14:48:50.478273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.540 [2024-11-04 14:48:50.478466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.540 [2024-11-04 14:48:50.478490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.540 [2024-11-04 14:48:50.481238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.540 [2024-11-04 14:48:50.481434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.540 [2024-11-04 14:48:50.481456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.540 [2024-11-04 14:48:50.484230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.540 [2024-11-04 14:48:50.484427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.540 [2024-11-04 14:48:50.484449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.540 [2024-11-04 14:48:50.487247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.540 [2024-11-04 14:48:50.487444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.540 [2024-11-04 14:48:50.487466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.540 [2024-11-04 14:48:50.490255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.540 [2024-11-04 14:48:50.490450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.540 [2024-11-04 14:48:50.490472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.540 [2024-11-04 14:48:50.493229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.540 [2024-11-04 14:48:50.493423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.540 [2024-11-04 14:48:50.493445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.540 [2024-11-04 14:48:50.496210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.540 [2024-11-04 14:48:50.496406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.540 [2024-11-04 14:48:50.496427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.540 [2024-11-04 14:48:50.499196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.540 [2024-11-04 14:48:50.499391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.541 [2024-11-04 14:48:50.499412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.541 [2024-11-04 14:48:50.502180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.541 [2024-11-04 14:48:50.502375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.541 [2024-11-04 14:48:50.502396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.541 [2024-11-04 14:48:50.505152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.541 [2024-11-04 14:48:50.505346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.541 [2024-11-04 14:48:50.505368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.541 [2024-11-04 14:48:50.508176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.541 [2024-11-04 14:48:50.508372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.541 [2024-11-04 14:48:50.508393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.541 [2024-11-04 14:48:50.511161] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.541 [2024-11-04 14:48:50.511355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.541 [2024-11-04 14:48:50.511376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.541 [2024-11-04 14:48:50.514164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.541 [2024-11-04 14:48:50.514359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.541 [2024-11-04 14:48:50.514380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.541 [2024-11-04 14:48:50.517158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.541 [2024-11-04 14:48:50.517354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.541 [2024-11-04 14:48:50.517376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.541 [2024-11-04 14:48:50.520137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.541 [2024-11-04 14:48:50.520331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.541 [2024-11-04 14:48:50.520353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.541 [2024-11-04 14:48:50.523117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.541 [2024-11-04 14:48:50.523312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.541 [2024-11-04 14:48:50.523333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.541 [2024-11-04 14:48:50.526107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.541 [2024-11-04 14:48:50.526302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.541 [2024-11-04 14:48:50.526323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.541 [2024-11-04 14:48:50.529108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.541 [2024-11-04 14:48:50.529304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.541 [2024-11-04 14:48:50.529325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:22:41.541 [2024-11-04 14:48:50.532131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.541 [2024-11-04 14:48:50.532329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.541 [2024-11-04 14:48:50.532349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.541 [2024-11-04 14:48:50.535104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.541 [2024-11-04 14:48:50.535300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.541 [2024-11-04 14:48:50.535321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.541 [2024-11-04 14:48:50.538075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.541 [2024-11-04 14:48:50.538269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.541 [2024-11-04 14:48:50.538290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.541 [2024-11-04 14:48:50.541068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.541 [2024-11-04 14:48:50.541255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.541 [2024-11-04 14:48:50.541268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.541 [2024-11-04 14:48:50.544036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.541 [2024-11-04 14:48:50.544230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.541 [2024-11-04 14:48:50.544252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.541 [2024-11-04 14:48:50.547043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.541 [2024-11-04 14:48:50.547237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.541 [2024-11-04 14:48:50.547258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.541 [2024-11-04 14:48:50.550019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.541 [2024-11-04 14:48:50.550215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.541 [2024-11-04 14:48:50.550235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.541 [2024-11-04 14:48:50.553014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.541 [2024-11-04 14:48:50.553209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.541 [2024-11-04 14:48:50.553230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.541 [2024-11-04 14:48:50.556009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.541 [2024-11-04 14:48:50.556203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.541 [2024-11-04 14:48:50.556224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.541 [2024-11-04 14:48:50.558989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.541 [2024-11-04 14:48:50.559184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.541 [2024-11-04 14:48:50.559205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.541 [2024-11-04 14:48:50.561951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.541 [2024-11-04 14:48:50.562145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.541 [2024-11-04 14:48:50.562166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.541 [2024-11-04 14:48:50.564901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.541 [2024-11-04 14:48:50.565095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.541 [2024-11-04 14:48:50.565116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.541 [2024-11-04 14:48:50.567869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.541 [2024-11-04 14:48:50.568067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.541 [2024-11-04 14:48:50.568091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.541 [2024-11-04 14:48:50.570867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.541 [2024-11-04 14:48:50.571068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.541 [2024-11-04 14:48:50.571092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.541 [2024-11-04 14:48:50.573863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.541 [2024-11-04 14:48:50.574060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.541 [2024-11-04 14:48:50.574081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.541 [2024-11-04 14:48:50.576848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.541 [2024-11-04 14:48:50.577046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.542 [2024-11-04 14:48:50.577068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.542 [2024-11-04 14:48:50.579865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.542 [2024-11-04 14:48:50.580064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.542 [2024-11-04 14:48:50.580088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.542 [2024-11-04 14:48:50.582873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.542 [2024-11-04 14:48:50.583068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.542 [2024-11-04 14:48:50.583088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.542 [2024-11-04 14:48:50.585859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.542 [2024-11-04 14:48:50.586054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.542 [2024-11-04 14:48:50.586068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.542 [2024-11-04 14:48:50.588834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.542 [2024-11-04 14:48:50.589032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.542 [2024-11-04 14:48:50.589055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.542 [2024-11-04 14:48:50.591838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.542 [2024-11-04 14:48:50.592035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.542 [2024-11-04 14:48:50.592058] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.542 [2024-11-04 14:48:50.594819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.542 [2024-11-04 14:48:50.595013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.542 [2024-11-04 14:48:50.595035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.542 [2024-11-04 14:48:50.597797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.542 [2024-11-04 14:48:50.597992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.542 [2024-11-04 14:48:50.598013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.542 [2024-11-04 14:48:50.600798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.542 [2024-11-04 14:48:50.600995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.542 [2024-11-04 14:48:50.601013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.542 [2024-11-04 14:48:50.603814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.542 [2024-11-04 14:48:50.604010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.542 [2024-11-04 14:48:50.604031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.542 [2024-11-04 14:48:50.606801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.542 [2024-11-04 14:48:50.606999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.542 [2024-11-04 14:48:50.607020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.542 [2024-11-04 14:48:50.609840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.542 [2024-11-04 14:48:50.610038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.542 [2024-11-04 14:48:50.610059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.542 [2024-11-04 14:48:50.612829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.542 [2024-11-04 14:48:50.613025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.542 
[2024-11-04 14:48:50.613047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.542 [2024-11-04 14:48:50.615816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.542 [2024-11-04 14:48:50.616010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.542 [2024-11-04 14:48:50.616031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.542 [2024-11-04 14:48:50.618822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.542 [2024-11-04 14:48:50.619020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.542 [2024-11-04 14:48:50.619041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.542 [2024-11-04 14:48:50.621801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.542 [2024-11-04 14:48:50.621998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.542 [2024-11-04 14:48:50.622019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.542 [2024-11-04 14:48:50.624786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.542 [2024-11-04 14:48:50.624983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.542 [2024-11-04 14:48:50.625004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.542 [2024-11-04 14:48:50.627780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.542 [2024-11-04 14:48:50.627976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.542 [2024-11-04 14:48:50.627997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.542 [2024-11-04 14:48:50.630768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.542 [2024-11-04 14:48:50.630964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.542 [2024-11-04 14:48:50.630983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.542 [2024-11-04 14:48:50.633750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.542 [2024-11-04 14:48:50.633946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.542 [2024-11-04 14:48:50.633967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.542 [2024-11-04 14:48:50.636679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.542 [2024-11-04 14:48:50.636874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.542 [2024-11-04 14:48:50.636897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.542 [2024-11-04 14:48:50.639636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.542 [2024-11-04 14:48:50.639828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.542 [2024-11-04 14:48:50.639848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.542 [2024-11-04 14:48:50.642529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.542 [2024-11-04 14:48:50.642737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.542 [2024-11-04 14:48:50.642752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.542 [2024-11-04 14:48:50.645494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.542 [2024-11-04 14:48:50.645710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.542 [2024-11-04 14:48:50.645727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.542 [2024-11-04 14:48:50.648474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.542 [2024-11-04 14:48:50.648680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.542 [2024-11-04 14:48:50.648700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.542 [2024-11-04 14:48:50.651472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.542 [2024-11-04 14:48:50.651680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.542 [2024-11-04 14:48:50.651705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.542 [2024-11-04 14:48:50.654449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.542 [2024-11-04 14:48:50.654658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.542 [2024-11-04 14:48:50.654678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.542 [2024-11-04 14:48:50.657456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.543 [2024-11-04 14:48:50.657673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.543 [2024-11-04 14:48:50.657690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.543 [2024-11-04 14:48:50.660455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.543 [2024-11-04 14:48:50.660666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.543 [2024-11-04 14:48:50.660685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.543 [2024-11-04 14:48:50.663460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.543 [2024-11-04 14:48:50.663671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.543 [2024-11-04 14:48:50.663691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.543 [2024-11-04 14:48:50.666476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.543 [2024-11-04 14:48:50.666687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.543 [2024-11-04 14:48:50.666707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.543 [2024-11-04 14:48:50.669465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.543 [2024-11-04 14:48:50.669679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.543 [2024-11-04 14:48:50.669694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.543 [2024-11-04 14:48:50.672449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.543 [2024-11-04 14:48:50.672657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.543 [2024-11-04 14:48:50.672676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.543 [2024-11-04 14:48:50.675456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.543 [2024-11-04 14:48:50.675665] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.543 [2024-11-04 14:48:50.675685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.802 [2024-11-04 14:48:50.678454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.802 [2024-11-04 14:48:50.678666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.802 [2024-11-04 14:48:50.678685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.802 [2024-11-04 14:48:50.681446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.802 [2024-11-04 14:48:50.681660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.802 [2024-11-04 14:48:50.681680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.802 [2024-11-04 14:48:50.684444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.802 [2024-11-04 14:48:50.684650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.802 [2024-11-04 14:48:50.684670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.802 [2024-11-04 14:48:50.687420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.802 [2024-11-04 14:48:50.687630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.802 [2024-11-04 14:48:50.687649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.802 [2024-11-04 14:48:50.690408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.802 [2024-11-04 14:48:50.690616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.802 [2024-11-04 14:48:50.690635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.802 [2024-11-04 14:48:50.693419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.802 [2024-11-04 14:48:50.693635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.802 [2024-11-04 14:48:50.693652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.802 [2024-11-04 14:48:50.696406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.802 
[2024-11-04 14:48:50.696614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.802 [2024-11-04 14:48:50.696634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.802 [2024-11-04 14:48:50.699407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.802 [2024-11-04 14:48:50.699615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.802 [2024-11-04 14:48:50.699635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.802 [2024-11-04 14:48:50.702394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.802 [2024-11-04 14:48:50.702591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.802 [2024-11-04 14:48:50.702626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.802 [2024-11-04 14:48:50.705368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.802 [2024-11-04 14:48:50.705567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.802 [2024-11-04 14:48:50.705593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.802 [2024-11-04 14:48:50.708369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.802 [2024-11-04 14:48:50.708564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.802 [2024-11-04 14:48:50.708585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.802 [2024-11-04 14:48:50.711358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.802 [2024-11-04 14:48:50.711557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.802 [2024-11-04 14:48:50.711578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.802 [2024-11-04 14:48:50.714336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.802 [2024-11-04 14:48:50.714531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.802 [2024-11-04 14:48:50.714562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.803 [2024-11-04 14:48:50.717354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.803 [2024-11-04 14:48:50.717553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.803 [2024-11-04 14:48:50.717574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.803 [2024-11-04 14:48:50.720327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.803 [2024-11-04 14:48:50.720522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.803 [2024-11-04 14:48:50.720543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.803 [2024-11-04 14:48:50.723303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.803 [2024-11-04 14:48:50.723502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.803 [2024-11-04 14:48:50.723537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.803 [2024-11-04 14:48:50.726314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.803 [2024-11-04 14:48:50.726510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.803 [2024-11-04 14:48:50.726531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.803 [2024-11-04 14:48:50.729316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.803 [2024-11-04 14:48:50.729512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.803 [2024-11-04 14:48:50.729533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.803 [2024-11-04 14:48:50.732313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.803 [2024-11-04 14:48:50.732509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.803 [2024-11-04 14:48:50.732542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.803 [2024-11-04 14:48:50.735335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.803 [2024-11-04 14:48:50.735531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.803 [2024-11-04 14:48:50.735568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.803 [2024-11-04 14:48:50.738335] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.803 [2024-11-04 14:48:50.738533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.803 [2024-11-04 14:48:50.738555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.803 [2024-11-04 14:48:50.741323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.803 [2024-11-04 14:48:50.741518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.803 [2024-11-04 14:48:50.741540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.803 [2024-11-04 14:48:50.744294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.803 [2024-11-04 14:48:50.744497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.803 [2024-11-04 14:48:50.744517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.803 [2024-11-04 14:48:50.747278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.803 [2024-11-04 14:48:50.747476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.803 [2024-11-04 14:48:50.747498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.803 [2024-11-04 14:48:50.750266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.803 [2024-11-04 14:48:50.750463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.803 [2024-11-04 14:48:50.750484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.803 [2024-11-04 14:48:50.753241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.803 [2024-11-04 14:48:50.753439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.803 [2024-11-04 14:48:50.753460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.803 [2024-11-04 14:48:50.756225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.803 [2024-11-04 14:48:50.756420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.803 [2024-11-04 14:48:50.756441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:22:41.803 [2024-11-04 14:48:50.759211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.803 [2024-11-04 14:48:50.759409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.803 [2024-11-04 14:48:50.759430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.803 [2024-11-04 14:48:50.762202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.803 [2024-11-04 14:48:50.762397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.803 [2024-11-04 14:48:50.762419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.803 [2024-11-04 14:48:50.765203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.803 [2024-11-04 14:48:50.765400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.803 [2024-11-04 14:48:50.765421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.803 [2024-11-04 14:48:50.768201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.803 [2024-11-04 14:48:50.768398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.803 [2024-11-04 14:48:50.768419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.803 [2024-11-04 14:48:50.771189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.803 [2024-11-04 14:48:50.771388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.803 [2024-11-04 14:48:50.771409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.803 [2024-11-04 14:48:50.774197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.803 [2024-11-04 14:48:50.774394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.803 [2024-11-04 14:48:50.774415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.803 [2024-11-04 14:48:50.777151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.803 [2024-11-04 14:48:50.777349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.803 [2024-11-04 14:48:50.777370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.803 [2024-11-04 14:48:50.780133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.803 [2024-11-04 14:48:50.780328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.803 [2024-11-04 14:48:50.780354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.803 [2024-11-04 14:48:50.783131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.803 [2024-11-04 14:48:50.783326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.803 [2024-11-04 14:48:50.783347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.803 [2024-11-04 14:48:50.786099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.803 [2024-11-04 14:48:50.786293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.803 [2024-11-04 14:48:50.786315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.803 [2024-11-04 14:48:50.789072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.803 [2024-11-04 14:48:50.789267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.803 [2024-11-04 14:48:50.789288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.804 [2024-11-04 14:48:50.792042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.804 [2024-11-04 14:48:50.792237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.804 [2024-11-04 14:48:50.792258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.804 [2024-11-04 14:48:50.795030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.804 [2024-11-04 14:48:50.795225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.804 [2024-11-04 14:48:50.795238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.804 [2024-11-04 14:48:50.798007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.804 [2024-11-04 14:48:50.798203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.804 [2024-11-04 14:48:50.798222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.804 [2024-11-04 14:48:50.800965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.804 [2024-11-04 14:48:50.801159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.804 [2024-11-04 14:48:50.801180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.804 [2024-11-04 14:48:50.803953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.804 [2024-11-04 14:48:50.804151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.804 [2024-11-04 14:48:50.804172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.804 [2024-11-04 14:48:50.806916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.804 [2024-11-04 14:48:50.807113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.804 [2024-11-04 14:48:50.807132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.804 [2024-11-04 14:48:50.809936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.804 [2024-11-04 14:48:50.810132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.804 [2024-11-04 14:48:50.810152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.804 [2024-11-04 14:48:50.812917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.804 [2024-11-04 14:48:50.813115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.804 [2024-11-04 14:48:50.813136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.804 [2024-11-04 14:48:50.815877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.804 [2024-11-04 14:48:50.816075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.804 [2024-11-04 14:48:50.816093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.804 [2024-11-04 14:48:50.818843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.804 [2024-11-04 14:48:50.819039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.804 [2024-11-04 14:48:50.819058] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.804 [2024-11-04 14:48:50.821830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.804 [2024-11-04 14:48:50.822025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.804 [2024-11-04 14:48:50.822064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.804 [2024-11-04 14:48:50.824820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.804 [2024-11-04 14:48:50.825014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.804 [2024-11-04 14:48:50.825034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.804 [2024-11-04 14:48:50.827793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.804 [2024-11-04 14:48:50.827988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.804 [2024-11-04 14:48:50.828008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.804 [2024-11-04 14:48:50.830777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.804 [2024-11-04 14:48:50.830972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.804 [2024-11-04 14:48:50.830992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.804 [2024-11-04 14:48:50.833776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.804 [2024-11-04 14:48:50.833973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.804 [2024-11-04 14:48:50.833988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.804 [2024-11-04 14:48:50.836743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.804 [2024-11-04 14:48:50.836938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.804 [2024-11-04 14:48:50.836958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.804 [2024-11-04 14:48:50.839780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.804 [2024-11-04 14:48:50.839979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.804 
[2024-11-04 14:48:50.840000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.804 [2024-11-04 14:48:50.842770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.804 [2024-11-04 14:48:50.842968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.804 [2024-11-04 14:48:50.842988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.804 [2024-11-04 14:48:50.845764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.804 [2024-11-04 14:48:50.845958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.804 [2024-11-04 14:48:50.845982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.804 [2024-11-04 14:48:50.848743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.804 [2024-11-04 14:48:50.848938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.804 [2024-11-04 14:48:50.848953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.804 [2024-11-04 14:48:50.851720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.804 [2024-11-04 14:48:50.851915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.804 [2024-11-04 14:48:50.851934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.804 [2024-11-04 14:48:50.854718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.804 [2024-11-04 14:48:50.854912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.804 [2024-11-04 14:48:50.854933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.804 [2024-11-04 14:48:50.857696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.804 [2024-11-04 14:48:50.857889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.804 [2024-11-04 14:48:50.857926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.804 [2024-11-04 14:48:50.860690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.804 [2024-11-04 14:48:50.860885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.804 [2024-11-04 14:48:50.860905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.804 [2024-11-04 14:48:50.863678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.804 [2024-11-04 14:48:50.863872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.804 [2024-11-04 14:48:50.863895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.804 [2024-11-04 14:48:50.866686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.804 [2024-11-04 14:48:50.866880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.804 [2024-11-04 14:48:50.866898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.804 [2024-11-04 14:48:50.869690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.805 [2024-11-04 14:48:50.869883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.805 [2024-11-04 14:48:50.869901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.805 [2024-11-04 14:48:50.872692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.805 [2024-11-04 14:48:50.872887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.805 [2024-11-04 14:48:50.872906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.805 [2024-11-04 14:48:50.875616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.805 [2024-11-04 14:48:50.875812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.805 [2024-11-04 14:48:50.875829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.805 [2024-11-04 14:48:50.878516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.805 [2024-11-04 14:48:50.878718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.805 [2024-11-04 14:48:50.878736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.805 [2024-11-04 14:48:50.881460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.805 [2024-11-04 14:48:50.881684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.805 [2024-11-04 14:48:50.881701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.805 [2024-11-04 14:48:50.884482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.805 [2024-11-04 14:48:50.884692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.805 [2024-11-04 14:48:50.884709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.805 [2024-11-04 14:48:50.887476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.805 [2024-11-04 14:48:50.887683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.805 [2024-11-04 14:48:50.887700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.805 [2024-11-04 14:48:50.890448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.805 [2024-11-04 14:48:50.890655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.805 [2024-11-04 14:48:50.890672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.805 [2024-11-04 14:48:50.893347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.805 [2024-11-04 14:48:50.893541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.805 [2024-11-04 14:48:50.893559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.805 [2024-11-04 14:48:50.896251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.805 [2024-11-04 14:48:50.896445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.805 [2024-11-04 14:48:50.896466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.805 [2024-11-04 14:48:50.899135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.805 [2024-11-04 14:48:50.899329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.805 [2024-11-04 14:48:50.899344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.805 [2024-11-04 14:48:50.902029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.805 [2024-11-04 14:48:50.902220] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.805 [2024-11-04 14:48:50.902238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.805 [2024-11-04 14:48:50.904944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.805 [2024-11-04 14:48:50.905136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.805 [2024-11-04 14:48:50.905154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.805 [2024-11-04 14:48:50.907894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.805 [2024-11-04 14:48:50.908084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.805 [2024-11-04 14:48:50.908102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.805 [2024-11-04 14:48:50.910808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.805 [2024-11-04 14:48:50.910998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.805 [2024-11-04 14:48:50.911016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.805 [2024-11-04 14:48:50.913706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.805 [2024-11-04 14:48:50.913896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.805 [2024-11-04 14:48:50.913914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.805 [2024-11-04 14:48:50.916587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.805 [2024-11-04 14:48:50.916792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.805 [2024-11-04 14:48:50.916811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.805 [2024-11-04 14:48:50.919478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.805 [2024-11-04 14:48:50.919678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.805 [2024-11-04 14:48:50.919699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.805 [2024-11-04 14:48:50.922387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.805 
[2024-11-04 14:48:50.922579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.805 [2024-11-04 14:48:50.922597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.805 [2024-11-04 14:48:50.925285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.805 [2024-11-04 14:48:50.925479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.805 [2024-11-04 14:48:50.925497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.805 [2024-11-04 14:48:50.928230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.805 [2024-11-04 14:48:50.928422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.805 [2024-11-04 14:48:50.928443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.805 [2024-11-04 14:48:50.931132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.805 [2024-11-04 14:48:50.931323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.805 [2024-11-04 14:48:50.931344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.805 [2024-11-04 14:48:50.934051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.805 [2024-11-04 14:48:50.934244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.805 [2024-11-04 14:48:50.934265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.805 [2024-11-04 14:48:50.936965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.805 [2024-11-04 14:48:50.937161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.805 [2024-11-04 14:48:50.937182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.805 [2024-11-04 14:48:50.939968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:41.805 [2024-11-04 14:48:50.940166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.805 [2024-11-04 14:48:50.940187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.065 [2024-11-04 14:48:50.942962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.065 [2024-11-04 14:48:50.943158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.065 [2024-11-04 14:48:50.943180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.065 [2024-11-04 14:48:50.945924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.065 [2024-11-04 14:48:50.946120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.065 [2024-11-04 14:48:50.946141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.065 [2024-11-04 14:48:50.948877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.066 [2024-11-04 14:48:50.949067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.066 [2024-11-04 14:48:50.949085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.066 [2024-11-04 14:48:50.951859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.066 [2024-11-04 14:48:50.952053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.066 [2024-11-04 14:48:50.952067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.066 [2024-11-04 14:48:50.954830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.066 [2024-11-04 14:48:50.955024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.066 [2024-11-04 14:48:50.955043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.066 [2024-11-04 14:48:50.957792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.066 [2024-11-04 14:48:50.957985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.066 [2024-11-04 14:48:50.957999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.066 [2024-11-04 14:48:50.960771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.066 [2024-11-04 14:48:50.960967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.066 [2024-11-04 14:48:50.960988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.066 [2024-11-04 14:48:50.963781] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.066 [2024-11-04 14:48:50.963979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.066 [2024-11-04 14:48:50.964000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.066 [2024-11-04 14:48:50.966756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.066 [2024-11-04 14:48:50.966952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.066 [2024-11-04 14:48:50.966970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.066 [2024-11-04 14:48:50.969714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.066 [2024-11-04 14:48:50.969908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.066 [2024-11-04 14:48:50.969934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.066 [2024-11-04 14:48:50.972710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.066 [2024-11-04 14:48:50.972905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.066 [2024-11-04 14:48:50.972926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.066 [2024-11-04 14:48:50.975720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.066 [2024-11-04 14:48:50.975915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.066 [2024-11-04 14:48:50.975932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.066 [2024-11-04 14:48:50.978704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.066 [2024-11-04 14:48:50.978901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.066 [2024-11-04 14:48:50.978919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.066 [2024-11-04 14:48:50.981652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.066 [2024-11-04 14:48:50.981848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.066 [2024-11-04 14:48:50.981870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:22:42.066 [2024-11-04 14:48:50.984639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.066 [2024-11-04 14:48:50.984833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.066 [2024-11-04 14:48:50.984853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.066 [2024-11-04 14:48:50.987626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.066 [2024-11-04 14:48:50.987820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.066 [2024-11-04 14:48:50.987838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.066 [2024-11-04 14:48:50.990624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.066 [2024-11-04 14:48:50.990820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.066 [2024-11-04 14:48:50.990839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.066 [2024-11-04 14:48:50.993645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.066 [2024-11-04 14:48:50.993839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.066 [2024-11-04 14:48:50.993861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.066 [2024-11-04 14:48:50.996618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.066 [2024-11-04 14:48:50.996814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.066 [2024-11-04 14:48:50.996834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.066 [2024-11-04 14:48:50.999632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.066 [2024-11-04 14:48:50.999830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.066 [2024-11-04 14:48:50.999848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.066 [2024-11-04 14:48:51.002623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.066 [2024-11-04 14:48:51.002820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.066 [2024-11-04 14:48:51.002839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.066 [2024-11-04 14:48:51.005628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.066 [2024-11-04 14:48:51.005823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.066 [2024-11-04 14:48:51.005844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.066 [2024-11-04 14:48:51.008601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.066 [2024-11-04 14:48:51.008810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.066 [2024-11-04 14:48:51.008831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.066 [2024-11-04 14:48:51.011618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.066 [2024-11-04 14:48:51.011813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.066 [2024-11-04 14:48:51.011853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.066 [2024-11-04 14:48:51.014622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.066 [2024-11-04 14:48:51.014818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.066 [2024-11-04 14:48:51.014839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.066 [2024-11-04 14:48:51.017592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.066 [2024-11-04 14:48:51.017797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.066 [2024-11-04 14:48:51.017817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.066 [2024-11-04 14:48:51.020560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.066 [2024-11-04 14:48:51.020771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.066 [2024-11-04 14:48:51.020790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.066 [2024-11-04 14:48:51.023548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.066 [2024-11-04 14:48:51.023755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.066 [2024-11-04 14:48:51.023774] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.066 [2024-11-04 14:48:51.026569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.066 [2024-11-04 14:48:51.026778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.067 [2024-11-04 14:48:51.026793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.067 [2024-11-04 14:48:51.029601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.067 [2024-11-04 14:48:51.029809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.067 [2024-11-04 14:48:51.029831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.067 [2024-11-04 14:48:51.032645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.067 [2024-11-04 14:48:51.032842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.067 [2024-11-04 14:48:51.032868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.067 [2024-11-04 14:48:51.035680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.067 [2024-11-04 14:48:51.035876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.067 [2024-11-04 14:48:51.035907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.067 [2024-11-04 14:48:51.038628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.067 [2024-11-04 14:48:51.038819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.067 [2024-11-04 14:48:51.038848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.067 [2024-11-04 14:48:51.041531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.067 [2024-11-04 14:48:51.041740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.067 [2024-11-04 14:48:51.041757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.067 [2024-11-04 14:48:51.044445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.067 [2024-11-04 14:48:51.044648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.067 [2024-11-04 14:48:51.044665] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.067 [2024-11-04 14:48:51.047385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.067 [2024-11-04 14:48:51.047580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.067 [2024-11-04 14:48:51.047603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.067 [2024-11-04 14:48:51.050299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.067 [2024-11-04 14:48:51.050493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.067 [2024-11-04 14:48:51.050515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.067 [2024-11-04 14:48:51.053222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.067 [2024-11-04 14:48:51.053415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.067 [2024-11-04 14:48:51.053434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.067 [2024-11-04 14:48:51.056148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.067 [2024-11-04 14:48:51.056340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.067 [2024-11-04 14:48:51.056361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.067 [2024-11-04 14:48:51.059081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.067 [2024-11-04 14:48:51.059271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.067 [2024-11-04 14:48:51.059292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.067 [2024-11-04 14:48:51.061998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.067 [2024-11-04 14:48:51.062191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.067 [2024-11-04 14:48:51.062212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.067 [2024-11-04 14:48:51.064895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.067 [2024-11-04 14:48:51.065085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:42.067 [2024-11-04 14:48:51.065107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.067 [2024-11-04 14:48:51.067801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.067 [2024-11-04 14:48:51.067995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.067 [2024-11-04 14:48:51.068016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.067 [2024-11-04 14:48:51.070807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.067 [2024-11-04 14:48:51.071002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.067 [2024-11-04 14:48:51.071024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.067 [2024-11-04 14:48:51.073811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.067 [2024-11-04 14:48:51.074008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.067 [2024-11-04 14:48:51.074029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.067 [2024-11-04 14:48:51.076781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.067 [2024-11-04 14:48:51.076971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.067 [2024-11-04 14:48:51.076992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.067 [2024-11-04 14:48:51.079695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.067 [2024-11-04 14:48:51.079885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.067 [2024-11-04 14:48:51.079903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.067 [2024-11-04 14:48:51.082592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.067 [2024-11-04 14:48:51.082792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.067 [2024-11-04 14:48:51.082812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.067 [2024-11-04 14:48:51.085486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.067 [2024-11-04 14:48:51.085703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.067 [2024-11-04 14:48:51.085722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.067 [2024-11-04 14:48:51.088416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.067 [2024-11-04 14:48:51.088619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.067 [2024-11-04 14:48:51.088639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.067 [2024-11-04 14:48:51.091340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.067 [2024-11-04 14:48:51.091532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.067 [2024-11-04 14:48:51.091546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.067 [2024-11-04 14:48:51.094276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.067 [2024-11-04 14:48:51.094467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.067 [2024-11-04 14:48:51.094502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.067 [2024-11-04 14:48:51.097204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.067 [2024-11-04 14:48:51.097398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.067 [2024-11-04 14:48:51.097421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.067 [2024-11-04 14:48:51.100125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.067 [2024-11-04 14:48:51.100316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.067 [2024-11-04 14:48:51.100337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.067 [2024-11-04 14:48:51.103028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.067 [2024-11-04 14:48:51.103222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.067 [2024-11-04 14:48:51.103240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.067 [2024-11-04 14:48:51.105945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.068 [2024-11-04 14:48:51.106136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.068 [2024-11-04 14:48:51.106156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.068 [2024-11-04 14:48:51.108834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.068 [2024-11-04 14:48:51.109027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.068 [2024-11-04 14:48:51.109048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.068 [2024-11-04 14:48:51.111742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.068 [2024-11-04 14:48:51.111932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.068 [2024-11-04 14:48:51.111952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.068 [2024-11-04 14:48:51.114666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.068 [2024-11-04 14:48:51.114858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.068 [2024-11-04 14:48:51.114878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.068 [2024-11-04 14:48:51.117549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.068 [2024-11-04 14:48:51.117762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.068 [2024-11-04 14:48:51.117780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.068 [2024-11-04 14:48:51.120458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.068 [2024-11-04 14:48:51.120661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.068 [2024-11-04 14:48:51.120681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.068 [2024-11-04 14:48:51.123368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.068 [2024-11-04 14:48:51.123561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.068 [2024-11-04 14:48:51.123575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.068 [2024-11-04 14:48:51.126275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.068 [2024-11-04 14:48:51.126469] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.068 [2024-11-04 14:48:51.126484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.068 [2024-11-04 14:48:51.129186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.068 [2024-11-04 14:48:51.129379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.068 [2024-11-04 14:48:51.129398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.068 [2024-11-04 14:48:51.132114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.068 [2024-11-04 14:48:51.132307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.068 [2024-11-04 14:48:51.132326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.068 [2024-11-04 14:48:51.135043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.068 [2024-11-04 14:48:51.135234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.068 [2024-11-04 14:48:51.135252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.068 [2024-11-04 14:48:51.137912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.068 [2024-11-04 14:48:51.138105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.068 [2024-11-04 14:48:51.138124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.068 [2024-11-04 14:48:51.140825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.068 [2024-11-04 14:48:51.141016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.068 [2024-11-04 14:48:51.141034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.068 [2024-11-04 14:48:51.143742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.068 [2024-11-04 14:48:51.143935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.068 [2024-11-04 14:48:51.143952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.068 [2024-11-04 14:48:51.146642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.068 
[2024-11-04 14:48:51.146834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.068 [2024-11-04 14:48:51.146851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.068 [2024-11-04 14:48:51.149532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.068 [2024-11-04 14:48:51.149740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.068 [2024-11-04 14:48:51.149757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.068 [2024-11-04 14:48:51.152464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.068 [2024-11-04 14:48:51.152669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.068 [2024-11-04 14:48:51.152686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.068 [2024-11-04 14:48:51.155440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.068 [2024-11-04 14:48:51.155648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.068 [2024-11-04 14:48:51.155665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.068 [2024-11-04 14:48:51.158434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.068 [2024-11-04 14:48:51.158642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.068 [2024-11-04 14:48:51.158659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.068 [2024-11-04 14:48:51.161427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.068 [2024-11-04 14:48:51.161645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.068 [2024-11-04 14:48:51.161662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.068 [2024-11-04 14:48:51.164411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.068 [2024-11-04 14:48:51.164619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.068 [2024-11-04 14:48:51.164639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.068 [2024-11-04 14:48:51.167430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.068 [2024-11-04 14:48:51.167640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.068 [2024-11-04 14:48:51.167660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.068 [2024-11-04 14:48:51.170420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.068 [2024-11-04 14:48:51.170626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.068 [2024-11-04 14:48:51.170646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.068 [2024-11-04 14:48:51.173384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.068 [2024-11-04 14:48:51.173588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.068 [2024-11-04 14:48:51.173620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.068 [2024-11-04 14:48:51.176374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.068 [2024-11-04 14:48:51.176569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.068 [2024-11-04 14:48:51.176587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.068 [2024-11-04 14:48:51.179372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.068 [2024-11-04 14:48:51.179565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.068 [2024-11-04 14:48:51.179583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.068 [2024-11-04 14:48:51.182264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.068 [2024-11-04 14:48:51.182459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.068 [2024-11-04 14:48:51.182477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.068 [2024-11-04 14:48:51.185147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.069 [2024-11-04 14:48:51.185338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.069 [2024-11-04 14:48:51.185357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.069 [2024-11-04 14:48:51.188050] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.069 [2024-11-04 14:48:51.188242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.069 [2024-11-04 14:48:51.188264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.069 [2024-11-04 14:48:51.190947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.069 [2024-11-04 14:48:51.191138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.069 [2024-11-04 14:48:51.191158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.069 [2024-11-04 14:48:51.193879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.069 [2024-11-04 14:48:51.194077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.069 [2024-11-04 14:48:51.194095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.069 [2024-11-04 14:48:51.196854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.069 [2024-11-04 14:48:51.197048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.069 [2024-11-04 14:48:51.197070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.069 [2024-11-04 14:48:51.199858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.069 [2024-11-04 14:48:51.200054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.069 [2024-11-04 14:48:51.200068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.069 [2024-11-04 14:48:51.202849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.069 [2024-11-04 14:48:51.203046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.069 [2024-11-04 14:48:51.203065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.328 [2024-11-04 14:48:51.205836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.328 [2024-11-04 14:48:51.206029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.328 [2024-11-04 14:48:51.206048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
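The repeated data_crc32_calc_done errors above are the NVMe/TCP data digest check failing: the receiver computes CRC-32C over each data PDU payload, compares it with the digest carried in the PDU, and on a mismatch the affected WRITE completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), exactly as logged. Below is a minimal illustrative sketch of that comparison, not SPDK code; the helper names crc32c and digest_ok and the example payload are assumptions made only for this example.

/* Illustrative sketch (not SPDK code): verify an NVMe/TCP-style data digest.
 * Assumes the digest is CRC-32C (Castagnoli, reflected polynomial 0x82F63B78)
 * computed over the data PDU payload. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            /* Reflected CRC-32C: shift right, conditionally XOR the polynomial. */
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Returns 1 when the received digest matches the payload, 0 otherwise
 * (the "data digest error" case reported in the log above). */
static int digest_ok(const uint8_t *payload, size_t len, uint32_t received_ddgst)
{
    return crc32c(payload, len) == received_ddgst;
}

int main(void)
{
    uint8_t payload[512];
    memset(payload, 0xA5, sizeof(payload));           /* arbitrary example payload */

    uint32_t good = crc32c(payload, sizeof(payload)); /* digest a well-behaved sender would attach */
    uint32_t bad  = good ^ 0x1u;                      /* deliberately corrupted digest */

    printf("intact digest:    %s\n", digest_ok(payload, sizeof(payload), good) ? "ok" : "data digest error");
    printf("corrupted digest: %s\n", digest_ok(payload, sizeof(payload), bad)  ? "ok" : "data digest error");
    return 0;
}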
00:22:42.328 [2024-11-04 14:48:51.208742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.328 [2024-11-04 14:48:51.208933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.328 [2024-11-04 14:48:51.208952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.328 [2024-11-04 14:48:51.211674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.328 [2024-11-04 14:48:51.211864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.328 [2024-11-04 14:48:51.211881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.328 [2024-11-04 14:48:51.214587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.328 [2024-11-04 14:48:51.214790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.328 [2024-11-04 14:48:51.214807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.328 [2024-11-04 14:48:51.217484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.328 [2024-11-04 14:48:51.217702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.328 [2024-11-04 14:48:51.217720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.328 [2024-11-04 14:48:51.220470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.328 [2024-11-04 14:48:51.220678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.328 [2024-11-04 14:48:51.220695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.328 [2024-11-04 14:48:51.223474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.328 [2024-11-04 14:48:51.223683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.329 [2024-11-04 14:48:51.223700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.329 [2024-11-04 14:48:51.226491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.329 [2024-11-04 14:48:51.226699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.329 [2024-11-04 14:48:51.226720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.329 [2024-11-04 14:48:51.229466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.329 [2024-11-04 14:48:51.229686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.329 [2024-11-04 14:48:51.229705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.329 [2024-11-04 14:48:51.232457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.329 [2024-11-04 14:48:51.232665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.329 [2024-11-04 14:48:51.232680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.329 [2024-11-04 14:48:51.235409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.329 [2024-11-04 14:48:51.235614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.329 [2024-11-04 14:48:51.235632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.329 [2024-11-04 14:48:51.238392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.329 [2024-11-04 14:48:51.238587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.329 [2024-11-04 14:48:51.238618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.329 [2024-11-04 14:48:51.241376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.329 [2024-11-04 14:48:51.241572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.329 [2024-11-04 14:48:51.241616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.329 [2024-11-04 14:48:51.244359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.329 [2024-11-04 14:48:51.244557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.329 [2024-11-04 14:48:51.244579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.329 [2024-11-04 14:48:51.247322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.329 [2024-11-04 14:48:51.247518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.329 [2024-11-04 14:48:51.247532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.329 [2024-11-04 14:48:51.250300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.329 [2024-11-04 14:48:51.250496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.329 [2024-11-04 14:48:51.250517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.329 [2024-11-04 14:48:51.253297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.329 [2024-11-04 14:48:51.253491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.329 [2024-11-04 14:48:51.253513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.329 [2024-11-04 14:48:51.256269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.329 [2024-11-04 14:48:51.256467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.329 [2024-11-04 14:48:51.256489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.329 [2024-11-04 14:48:51.259243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.329 [2024-11-04 14:48:51.259439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.329 [2024-11-04 14:48:51.259460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.329 [2024-11-04 14:48:51.262278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.329 [2024-11-04 14:48:51.262476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.329 [2024-11-04 14:48:51.262491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.329 [2024-11-04 14:48:51.265228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.329 [2024-11-04 14:48:51.265425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.329 [2024-11-04 14:48:51.265451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.329 [2024-11-04 14:48:51.268213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.329 [2024-11-04 14:48:51.268409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.329 [2024-11-04 14:48:51.268453] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.329 [2024-11-04 14:48:51.271226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.329 [2024-11-04 14:48:51.271424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.329 [2024-11-04 14:48:51.271457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.329 [2024-11-04 14:48:51.274174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.329 [2024-11-04 14:48:51.274367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.329 [2024-11-04 14:48:51.274390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.329 [2024-11-04 14:48:51.277093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.329 [2024-11-04 14:48:51.277288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.329 [2024-11-04 14:48:51.277309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.329 [2024-11-04 14:48:51.280027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.329 [2024-11-04 14:48:51.280216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.329 [2024-11-04 14:48:51.280237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.329 [2024-11-04 14:48:51.283007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.330 [2024-11-04 14:48:51.283207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.330 [2024-11-04 14:48:51.283228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.330 [2024-11-04 14:48:51.286046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.330 [2024-11-04 14:48:51.286243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.330 [2024-11-04 14:48:51.286258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.330 [2024-11-04 14:48:51.288965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.330 [2024-11-04 14:48:51.289158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.330 
[2024-11-04 14:48:51.289176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.330 [2024-11-04 14:48:51.291916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.330 [2024-11-04 14:48:51.292109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.330 [2024-11-04 14:48:51.292146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.330 [2024-11-04 14:48:51.294909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.330 [2024-11-04 14:48:51.295105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.330 [2024-11-04 14:48:51.295124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.330 [2024-11-04 14:48:51.297932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.330 [2024-11-04 14:48:51.298127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.330 [2024-11-04 14:48:51.298146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.330 [2024-11-04 14:48:51.300907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.330 [2024-11-04 14:48:51.301102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.330 [2024-11-04 14:48:51.301121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.330 [2024-11-04 14:48:51.303916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.330 [2024-11-04 14:48:51.304112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.330 [2024-11-04 14:48:51.304131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.330 [2024-11-04 14:48:51.306889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.330 [2024-11-04 14:48:51.307083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.330 [2024-11-04 14:48:51.307098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.330 [2024-11-04 14:48:51.309866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.330 [2024-11-04 14:48:51.310059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.330 [2024-11-04 14:48:51.310081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.330 [2024-11-04 14:48:51.312841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.330 [2024-11-04 14:48:51.313037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.330 [2024-11-04 14:48:51.313058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.330 [2024-11-04 14:48:51.315833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.330 [2024-11-04 14:48:51.316027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.330 [2024-11-04 14:48:51.316045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.330 [2024-11-04 14:48:51.318818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.330 [2024-11-04 14:48:51.319013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.330 [2024-11-04 14:48:51.319028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.330 [2024-11-04 14:48:51.321826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.330 [2024-11-04 14:48:51.322020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.330 [2024-11-04 14:48:51.322035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.330 [2024-11-04 14:48:51.324805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.330 [2024-11-04 14:48:51.324999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.330 [2024-11-04 14:48:51.325017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.330 [2024-11-04 14:48:51.327784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.330 [2024-11-04 14:48:51.327982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.330 [2024-11-04 14:48:51.328000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.330 [2024-11-04 14:48:51.330759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.330 [2024-11-04 14:48:51.330955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.330 [2024-11-04 14:48:51.330972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.330 [2024-11-04 14:48:51.333727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.330 [2024-11-04 14:48:51.333922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.330 [2024-11-04 14:48:51.333940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.330 [2024-11-04 14:48:51.336706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.330 [2024-11-04 14:48:51.336903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.330 [2024-11-04 14:48:51.336921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.330 [2024-11-04 14:48:51.339601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.331 [2024-11-04 14:48:51.340582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.331 [2024-11-04 14:48:51.340613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.331 10349.00 IOPS, 1293.62 MiB/s [2024-11-04T14:48:51.471Z] [2024-11-04 14:48:51.343500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.331 [2024-11-04 14:48:51.343706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.331 [2024-11-04 14:48:51.343726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.331 [2024-11-04 14:48:51.346502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.331 [2024-11-04 14:48:51.346712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.331 [2024-11-04 14:48:51.346732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.331 [2024-11-04 14:48:51.349466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.331 [2024-11-04 14:48:51.349689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.331 [2024-11-04 14:48:51.349707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.331 [2024-11-04 14:48:51.352487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 
00:22:42.331 [2024-11-04 14:48:51.352696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.331 [2024-11-04 14:48:51.352709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.331 [2024-11-04 14:48:51.355487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.331 [2024-11-04 14:48:51.355697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.331 [2024-11-04 14:48:51.355714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.331 [2024-11-04 14:48:51.358458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.331 [2024-11-04 14:48:51.358666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.331 [2024-11-04 14:48:51.358686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.331 [2024-11-04 14:48:51.361426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.331 [2024-11-04 14:48:51.361645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.331 [2024-11-04 14:48:51.361704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.331 [2024-11-04 14:48:51.364353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.331 [2024-11-04 14:48:51.364550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.331 [2024-11-04 14:48:51.364570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.331 [2024-11-04 14:48:51.367265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.331 [2024-11-04 14:48:51.367460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.331 [2024-11-04 14:48:51.367481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.331 [2024-11-04 14:48:51.370256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.331 [2024-11-04 14:48:51.370452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.331 [2024-11-04 14:48:51.370473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.331 [2024-11-04 14:48:51.373249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.331 [2024-11-04 14:48:51.373444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.331 [2024-11-04 14:48:51.373468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.331 [2024-11-04 14:48:51.376237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.331 [2024-11-04 14:48:51.376435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.331 [2024-11-04 14:48:51.376456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.331 [2024-11-04 14:48:51.379216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.331 [2024-11-04 14:48:51.379411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.331 [2024-11-04 14:48:51.379432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.331 [2024-11-04 14:48:51.382196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.331 [2024-11-04 14:48:51.382395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.331 [2024-11-04 14:48:51.382417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.331 [2024-11-04 14:48:51.385167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.331 [2024-11-04 14:48:51.385366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.331 [2024-11-04 14:48:51.385387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.331 [2024-11-04 14:48:51.388161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.331 [2024-11-04 14:48:51.388356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.331 [2024-11-04 14:48:51.388377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.331 [2024-11-04 14:48:51.391146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.331 [2024-11-04 14:48:51.391345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.331 [2024-11-04 14:48:51.391367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.331 [2024-11-04 14:48:51.394125] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.331 [2024-11-04 14:48:51.394323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.331 [2024-11-04 14:48:51.394344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.331 [2024-11-04 14:48:51.397101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.331 [2024-11-04 14:48:51.397296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.332 [2024-11-04 14:48:51.397326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.332 [2024-11-04 14:48:51.400098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.332 [2024-11-04 14:48:51.400293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.332 [2024-11-04 14:48:51.400314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.332 [2024-11-04 14:48:51.403079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.332 [2024-11-04 14:48:51.403275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.332 [2024-11-04 14:48:51.403296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.332 [2024-11-04 14:48:51.406077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.332 [2024-11-04 14:48:51.406273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.332 [2024-11-04 14:48:51.406293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.332 [2024-11-04 14:48:51.409047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.332 [2024-11-04 14:48:51.409244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.332 [2024-11-04 14:48:51.409259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.332 [2024-11-04 14:48:51.412029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.332 [2024-11-04 14:48:51.412224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.332 [2024-11-04 14:48:51.412245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
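The repeated tcp.c:2233 data_crc32_calc_done errors above are the NVMe/TCP data-digest check tripping during this error-injection pass: a data-carrying PDU may carry a CRC32C data digest (DDGST), the receiver recomputes the digest over the PDU payload, and on a mismatch the WRITE is completed back with the TRANSIENT TRANSPORT ERROR (00/22) status shown in the paired nvme_qpair.c notices. As a rough, stand-alone illustration of that check only (a bit-wise CRC32C sketch, not SPDK's accelerated digest code in tcp.c):

/*
 * Minimal sketch: recompute a CRC32C data digest over a received payload and
 * compare it with the digest carried in the PDU.  Illustrative only; the
 * 0xE3069283 value for "123456789" is the published CRC-32C check vector.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;                 /* CRC-32C initial value */

    while (len--) {
        crc ^= *p++;
        for (int bit = 0; bit < 8; bit++)       /* reflected poly 0x82F63B78 */
            crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
    }
    return crc ^ 0xFFFFFFFFu;                   /* final inversion */
}

int main(void)
{
    const char payload[] = "123456789";         /* stand-in for PDU data     */
    uint32_t expected_ddgst = 0xE3069283u;      /* digest carried in the PDU */
    uint32_t computed = crc32c(payload, sizeof(payload) - 1);

    if (computed != expected_ddgst)
        printf("Data digest error: got 0x%08X, expected 0x%08X\n",
               computed, expected_ddgst);
    else
        printf("DDGST ok: 0x%08X\n", computed);
    return 0;
}

With an intentionally corrupted payload or digest the mismatch branch fires, which is what this test provokes for every WRITE recorded in the log that follows.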
00:22:42.332 [2024-11-04 14:48:51.414999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.332 [2024-11-04 14:48:51.415195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.332 [2024-11-04 14:48:51.415216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.332 [2024-11-04 14:48:51.417970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.332 [2024-11-04 14:48:51.418165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.332 [2024-11-04 14:48:51.418186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.332 [2024-11-04 14:48:51.420951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.332 [2024-11-04 14:48:51.421146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.332 [2024-11-04 14:48:51.421164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.332 [2024-11-04 14:48:51.423949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.332 [2024-11-04 14:48:51.424145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.332 [2024-11-04 14:48:51.424164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.332 [2024-11-04 14:48:51.426923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.332 [2024-11-04 14:48:51.427119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.332 [2024-11-04 14:48:51.427138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.332 [2024-11-04 14:48:51.429970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.332 [2024-11-04 14:48:51.430172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.332 [2024-11-04 14:48:51.430192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.332 [2024-11-04 14:48:51.432982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.332 [2024-11-04 14:48:51.433177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.332 [2024-11-04 14:48:51.433196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.332 [2024-11-04 14:48:51.435956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.332 [2024-11-04 14:48:51.436152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.332 [2024-11-04 14:48:51.436173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.332 [2024-11-04 14:48:51.438940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.332 [2024-11-04 14:48:51.439135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.332 [2024-11-04 14:48:51.439158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.332 [2024-11-04 14:48:51.441941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.332 [2024-11-04 14:48:51.442142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.332 [2024-11-04 14:48:51.442163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.332 [2024-11-04 14:48:51.444917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.332 [2024-11-04 14:48:51.445113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.332 [2024-11-04 14:48:51.445135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.332 [2024-11-04 14:48:51.447902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.332 [2024-11-04 14:48:51.448100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.332 [2024-11-04 14:48:51.448115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.332 [2024-11-04 14:48:51.450891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.332 [2024-11-04 14:48:51.451085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.332 [2024-11-04 14:48:51.451100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.332 [2024-11-04 14:48:51.453874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.332 [2024-11-04 14:48:51.454069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.332 [2024-11-04 14:48:51.454084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.332 [2024-11-04 14:48:51.456866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.332 [2024-11-04 14:48:51.457065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.333 [2024-11-04 14:48:51.457086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.333 [2024-11-04 14:48:51.459838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.333 [2024-11-04 14:48:51.460031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.333 [2024-11-04 14:48:51.460055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.333 [2024-11-04 14:48:51.462856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.333 [2024-11-04 14:48:51.463051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.333 [2024-11-04 14:48:51.463070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.333 [2024-11-04 14:48:51.465858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.333 [2024-11-04 14:48:51.466051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.333 [2024-11-04 14:48:51.466076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.592 [2024-11-04 14:48:51.468869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.592 [2024-11-04 14:48:51.469066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.592 [2024-11-04 14:48:51.469085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.592 [2024-11-04 14:48:51.471878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.592 [2024-11-04 14:48:51.472074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.592 [2024-11-04 14:48:51.472092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.592 [2024-11-04 14:48:51.474835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.592 [2024-11-04 14:48:51.475031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.592 [2024-11-04 14:48:51.475049] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.592 [2024-11-04 14:48:51.477811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.592 [2024-11-04 14:48:51.478007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.592 [2024-11-04 14:48:51.478028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.592 [2024-11-04 14:48:51.480802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.592 [2024-11-04 14:48:51.480998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.592 [2024-11-04 14:48:51.481027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.592 [2024-11-04 14:48:51.483805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.592 [2024-11-04 14:48:51.484000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.592 [2024-11-04 14:48:51.484021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.592 [2024-11-04 14:48:51.486778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.592 [2024-11-04 14:48:51.486973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.592 [2024-11-04 14:48:51.486993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.593 [2024-11-04 14:48:51.489767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.593 [2024-11-04 14:48:51.489962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.593 [2024-11-04 14:48:51.489982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.593 [2024-11-04 14:48:51.492764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.593 [2024-11-04 14:48:51.492959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.593 [2024-11-04 14:48:51.492985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.593 [2024-11-04 14:48:51.495778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.593 [2024-11-04 14:48:51.495973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.593 
[2024-11-04 14:48:51.495994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.593 [2024-11-04 14:48:51.498775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.593 [2024-11-04 14:48:51.498969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.593 [2024-11-04 14:48:51.498989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.593 [2024-11-04 14:48:51.501766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.593 [2024-11-04 14:48:51.501962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.593 [2024-11-04 14:48:51.501983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.593 [2024-11-04 14:48:51.504736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.593 [2024-11-04 14:48:51.504933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.593 [2024-11-04 14:48:51.504953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.593 [2024-11-04 14:48:51.507725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.593 [2024-11-04 14:48:51.507923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.593 [2024-11-04 14:48:51.507944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.593 [2024-11-04 14:48:51.510720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.593 [2024-11-04 14:48:51.510918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.593 [2024-11-04 14:48:51.510938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.593 [2024-11-04 14:48:51.513693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.593 [2024-11-04 14:48:51.513891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.593 [2024-11-04 14:48:51.513912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.593 [2024-11-04 14:48:51.516699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.593 [2024-11-04 14:48:51.516894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.593 [2024-11-04 14:48:51.516915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.593 [2024-11-04 14:48:51.519671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.593 [2024-11-04 14:48:51.519864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.593 [2024-11-04 14:48:51.519882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.593 [2024-11-04 14:48:51.522639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.593 [2024-11-04 14:48:51.522834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.593 [2024-11-04 14:48:51.522851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.593 [2024-11-04 14:48:51.525620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.593 [2024-11-04 14:48:51.525813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.593 [2024-11-04 14:48:51.525832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.593 [2024-11-04 14:48:51.528562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.593 [2024-11-04 14:48:51.528770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.593 [2024-11-04 14:48:51.528787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.593 [2024-11-04 14:48:51.531516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.593 [2024-11-04 14:48:51.531719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.593 [2024-11-04 14:48:51.531739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.593 [2024-11-04 14:48:51.534426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.593 [2024-11-04 14:48:51.534628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.593 [2024-11-04 14:48:51.534645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.593 [2024-11-04 14:48:51.537323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.593 [2024-11-04 14:48:51.537515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.593 [2024-11-04 14:48:51.537536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.593 [2024-11-04 14:48:51.540212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.593 [2024-11-04 14:48:51.540407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.593 [2024-11-04 14:48:51.540425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.593 [2024-11-04 14:48:51.543157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.593 [2024-11-04 14:48:51.543351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.593 [2024-11-04 14:48:51.543372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.593 [2024-11-04 14:48:51.546129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.593 [2024-11-04 14:48:51.546327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.593 [2024-11-04 14:48:51.546348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.593 [2024-11-04 14:48:51.549089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.593 [2024-11-04 14:48:51.549288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.593 [2024-11-04 14:48:51.549309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.593 [2024-11-04 14:48:51.552066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.593 [2024-11-04 14:48:51.552257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.593 [2024-11-04 14:48:51.552278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.593 [2024-11-04 14:48:51.555010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.593 [2024-11-04 14:48:51.555206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.593 [2024-11-04 14:48:51.555228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.593 [2024-11-04 14:48:51.557978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.593 [2024-11-04 14:48:51.558177] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.593 [2024-11-04 14:48:51.558205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.593 [2024-11-04 14:48:51.560976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.593 [2024-11-04 14:48:51.561174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.593 [2024-11-04 14:48:51.561192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.593 [2024-11-04 14:48:51.563926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.594 [2024-11-04 14:48:51.564117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.594 [2024-11-04 14:48:51.564135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.594 [2024-11-04 14:48:51.566837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.594 [2024-11-04 14:48:51.567027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.594 [2024-11-04 14:48:51.567045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.594 [2024-11-04 14:48:51.569734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.594 [2024-11-04 14:48:51.569925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.594 [2024-11-04 14:48:51.569943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.594 [2024-11-04 14:48:51.572646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.594 [2024-11-04 14:48:51.572836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.594 [2024-11-04 14:48:51.572853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.594 [2024-11-04 14:48:51.575535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.594 [2024-11-04 14:48:51.575736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.594 [2024-11-04 14:48:51.575755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.594 [2024-11-04 14:48:51.578453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.594 
[2024-11-04 14:48:51.578656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.594 [2024-11-04 14:48:51.578675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.594 [2024-11-04 14:48:51.581421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.594 [2024-11-04 14:48:51.581635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.594 [2024-11-04 14:48:51.581652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.594 [2024-11-04 14:48:51.584394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.594 [2024-11-04 14:48:51.584589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.594 [2024-11-04 14:48:51.584616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.594 [2024-11-04 14:48:51.587365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.594 [2024-11-04 14:48:51.587561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.594 [2024-11-04 14:48:51.587582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.594 [2024-11-04 14:48:51.590331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.594 [2024-11-04 14:48:51.590523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.594 [2024-11-04 14:48:51.590544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.594 [2024-11-04 14:48:51.593317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.594 [2024-11-04 14:48:51.593514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.594 [2024-11-04 14:48:51.593534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.594 [2024-11-04 14:48:51.596268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.594 [2024-11-04 14:48:51.596465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.594 [2024-11-04 14:48:51.596486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.594 [2024-11-04 14:48:51.599254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.594 [2024-11-04 14:48:51.599452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.594 [2024-11-04 14:48:51.599473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.594 [2024-11-04 14:48:51.602300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.594 [2024-11-04 14:48:51.602496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.594 [2024-11-04 14:48:51.602517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.594 [2024-11-04 14:48:51.605263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.594 [2024-11-04 14:48:51.605461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.594 [2024-11-04 14:48:51.605482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.594 [2024-11-04 14:48:51.608246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.594 [2024-11-04 14:48:51.608444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.594 [2024-11-04 14:48:51.608465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.594 [2024-11-04 14:48:51.611218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.594 [2024-11-04 14:48:51.611414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.594 [2024-11-04 14:48:51.611434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.594 [2024-11-04 14:48:51.614214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.594 [2024-11-04 14:48:51.614410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.594 [2024-11-04 14:48:51.614432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.594 [2024-11-04 14:48:51.617193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.594 [2024-11-04 14:48:51.617390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.594 [2024-11-04 14:48:51.617410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.594 [2024-11-04 14:48:51.620176] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.594 [2024-11-04 14:48:51.620370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.594 [2024-11-04 14:48:51.620391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.594 [2024-11-04 14:48:51.623156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.594 [2024-11-04 14:48:51.623352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.594 [2024-11-04 14:48:51.623373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.594 [2024-11-04 14:48:51.626136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.594 [2024-11-04 14:48:51.626334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.594 [2024-11-04 14:48:51.626355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.594 [2024-11-04 14:48:51.629089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.594 [2024-11-04 14:48:51.629284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.594 [2024-11-04 14:48:51.629305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.594 [2024-11-04 14:48:51.632080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.594 [2024-11-04 14:48:51.632280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.594 [2024-11-04 14:48:51.632301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.594 [2024-11-04 14:48:51.635068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.594 [2024-11-04 14:48:51.635263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.594 [2024-11-04 14:48:51.635283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.594 [2024-11-04 14:48:51.638022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.594 [2024-11-04 14:48:51.638218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.594 [2024-11-04 14:48:51.638239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:22:42.594 [2024-11-04 14:48:51.640981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.594 [2024-11-04 14:48:51.641176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.595 [2024-11-04 14:48:51.641197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.595 [2024-11-04 14:48:51.643976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.595 [2024-11-04 14:48:51.644172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.595 [2024-11-04 14:48:51.644190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.595 [2024-11-04 14:48:51.646954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.595 [2024-11-04 14:48:51.647148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.595 [2024-11-04 14:48:51.647169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.595 [2024-11-04 14:48:51.649956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.595 [2024-11-04 14:48:51.650150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.595 [2024-11-04 14:48:51.650171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.595 [2024-11-04 14:48:51.652961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.595 [2024-11-04 14:48:51.653158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.595 [2024-11-04 14:48:51.653179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.595 [2024-11-04 14:48:51.655906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.595 [2024-11-04 14:48:51.656095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.595 [2024-11-04 14:48:51.656116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.595 [2024-11-04 14:48:51.658821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.595 [2024-11-04 14:48:51.659012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.595 [2024-11-04 14:48:51.659033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.595 [2024-11-04 14:48:51.661732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.595 [2024-11-04 14:48:51.661924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.595 [2024-11-04 14:48:51.661944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.595 [2024-11-04 14:48:51.664631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.595 [2024-11-04 14:48:51.664823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.595 [2024-11-04 14:48:51.664843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.595 [2024-11-04 14:48:51.667518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.595 [2024-11-04 14:48:51.667718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.595 [2024-11-04 14:48:51.667738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.595 [2024-11-04 14:48:51.670420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.595 [2024-11-04 14:48:51.670624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.595 [2024-11-04 14:48:51.670643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.595 [2024-11-04 14:48:51.673323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.595 [2024-11-04 14:48:51.673514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.595 [2024-11-04 14:48:51.673534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.595 [2024-11-04 14:48:51.676234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.595 [2024-11-04 14:48:51.676424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.595 [2024-11-04 14:48:51.676445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.595 [2024-11-04 14:48:51.679117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.595 [2024-11-04 14:48:51.679307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.595 [2024-11-04 14:48:51.679327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.595 [2024-11-04 14:48:51.682061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.595 [2024-11-04 14:48:51.682256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.595 [2024-11-04 14:48:51.682279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.595 [2024-11-04 14:48:51.684995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.595 [2024-11-04 14:48:51.685195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.595 [2024-11-04 14:48:51.685216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.595 [2024-11-04 14:48:51.687967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.595 [2024-11-04 14:48:51.688166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.595 [2024-11-04 14:48:51.688188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.595 [2024-11-04 14:48:51.690938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.595 [2024-11-04 14:48:51.691135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.595 [2024-11-04 14:48:51.691157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.595 [2024-11-04 14:48:51.693935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.595 [2024-11-04 14:48:51.694133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.595 [2024-11-04 14:48:51.694155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.595 [2024-11-04 14:48:51.696900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.595 [2024-11-04 14:48:51.697098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.595 [2024-11-04 14:48:51.697119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.595 [2024-11-04 14:48:51.699879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.595 [2024-11-04 14:48:51.700073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.595 [2024-11-04 14:48:51.700088] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.595 [2024-11-04 14:48:51.702865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.595 [2024-11-04 14:48:51.703062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.595 [2024-11-04 14:48:51.703084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.595 [2024-11-04 14:48:51.705840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.595 [2024-11-04 14:48:51.706037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.595 [2024-11-04 14:48:51.706058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.595 [2024-11-04 14:48:51.708820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.595 [2024-11-04 14:48:51.709015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.595 [2024-11-04 14:48:51.709036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.595 [2024-11-04 14:48:51.711807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.595 [2024-11-04 14:48:51.712006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.595 [2024-11-04 14:48:51.712027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.595 [2024-11-04 14:48:51.714821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.595 [2024-11-04 14:48:51.715016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.595 [2024-11-04 14:48:51.715034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.595 [2024-11-04 14:48:51.717765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.595 [2024-11-04 14:48:51.717959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.595 [2024-11-04 14:48:51.717977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.596 [2024-11-04 14:48:51.721860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.596 [2024-11-04 14:48:51.722061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.596 
[2024-11-04 14:48:51.722076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.596 [2024-11-04 14:48:51.724866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.596 [2024-11-04 14:48:51.725063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.596 [2024-11-04 14:48:51.725085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.596 [2024-11-04 14:48:51.727871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.596 [2024-11-04 14:48:51.728069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.596 [2024-11-04 14:48:51.728084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.855 [2024-11-04 14:48:51.730883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.855 [2024-11-04 14:48:51.731083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.855 [2024-11-04 14:48:51.731102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.855 [2024-11-04 14:48:51.733911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.855 [2024-11-04 14:48:51.734106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.855 [2024-11-04 14:48:51.734125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.855 [2024-11-04 14:48:51.736896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.855 [2024-11-04 14:48:51.737092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.855 [2024-11-04 14:48:51.737113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.855 [2024-11-04 14:48:51.739898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.855 [2024-11-04 14:48:51.740097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.855 [2024-11-04 14:48:51.740116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.855 [2024-11-04 14:48:51.742912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.855 [2024-11-04 14:48:51.743108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.855 [2024-11-04 14:48:51.743130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.855 [2024-11-04 14:48:51.745926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.855 [2024-11-04 14:48:51.746121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.855 [2024-11-04 14:48:51.746143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.855 [2024-11-04 14:48:51.748914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.855 [2024-11-04 14:48:51.749114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.855 [2024-11-04 14:48:51.749129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.855 [2024-11-04 14:48:51.751871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.855 [2024-11-04 14:48:51.752063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.855 [2024-11-04 14:48:51.752083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.855 [2024-11-04 14:48:51.754838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.855 [2024-11-04 14:48:51.755028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.855 [2024-11-04 14:48:51.755047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.855 [2024-11-04 14:48:51.757713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.855 [2024-11-04 14:48:51.757905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.855 [2024-11-04 14:48:51.757926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.855 [2024-11-04 14:48:51.760687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.855 [2024-11-04 14:48:51.760886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.855 [2024-11-04 14:48:51.760907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.855 [2024-11-04 14:48:51.763704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.855 [2024-11-04 14:48:51.763901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.855 [2024-11-04 14:48:51.763922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.855 [2024-11-04 14:48:51.766692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.855 [2024-11-04 14:48:51.766888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.855 [2024-11-04 14:48:51.766910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.855 [2024-11-04 14:48:51.769704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.855 [2024-11-04 14:48:51.769899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.855 [2024-11-04 14:48:51.769919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.855 [2024-11-04 14:48:51.772712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.855 [2024-11-04 14:48:51.772908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.855 [2024-11-04 14:48:51.772929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.855 [2024-11-04 14:48:51.775691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.855 [2024-11-04 14:48:51.775886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.856 [2024-11-04 14:48:51.775907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.856 [2024-11-04 14:48:51.778690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.856 [2024-11-04 14:48:51.778884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.856 [2024-11-04 14:48:51.778902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.856 [2024-11-04 14:48:51.781656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.856 [2024-11-04 14:48:51.781854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.856 [2024-11-04 14:48:51.781872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.856 [2024-11-04 14:48:51.784653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.856 [2024-11-04 14:48:51.784848] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.856 [2024-11-04 14:48:51.784869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.856 [2024-11-04 14:48:51.787636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.856 [2024-11-04 14:48:51.787833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.856 [2024-11-04 14:48:51.787854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.856 [2024-11-04 14:48:51.790642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.856 [2024-11-04 14:48:51.790840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.856 [2024-11-04 14:48:51.790869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.856 [2024-11-04 14:48:51.793660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.856 [2024-11-04 14:48:51.793854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.856 [2024-11-04 14:48:51.793876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.856 [2024-11-04 14:48:51.796674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.856 [2024-11-04 14:48:51.796869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.856 [2024-11-04 14:48:51.796889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.856 [2024-11-04 14:48:51.799703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.856 [2024-11-04 14:48:51.799899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.856 [2024-11-04 14:48:51.799919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.856 [2024-11-04 14:48:51.802708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.856 [2024-11-04 14:48:51.802902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.856 [2024-11-04 14:48:51.802923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.856 [2024-11-04 14:48:51.805671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.856 
[2024-11-04 14:48:51.805869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.856 [2024-11-04 14:48:51.805889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.856 [2024-11-04 14:48:51.808679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.856 [2024-11-04 14:48:51.808876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.856 [2024-11-04 14:48:51.808897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.856 [2024-11-04 14:48:51.811663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.856 [2024-11-04 14:48:51.811858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.856 [2024-11-04 14:48:51.811872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.856 [2024-11-04 14:48:51.814671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.856 [2024-11-04 14:48:51.814866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.856 [2024-11-04 14:48:51.814887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.856 [2024-11-04 14:48:51.817664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.856 [2024-11-04 14:48:51.817859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.856 [2024-11-04 14:48:51.817897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.856 [2024-11-04 14:48:51.820670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.856 [2024-11-04 14:48:51.820864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.856 [2024-11-04 14:48:51.820886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.856 [2024-11-04 14:48:51.823673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.856 [2024-11-04 14:48:51.823870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.856 [2024-11-04 14:48:51.823891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.856 [2024-11-04 14:48:51.826672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.856 [2024-11-04 14:48:51.826868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.856 [2024-11-04 14:48:51.826889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.856 [2024-11-04 14:48:51.829669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.856 [2024-11-04 14:48:51.829863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.856 [2024-11-04 14:48:51.829884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.856 [2024-11-04 14:48:51.832635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.856 [2024-11-04 14:48:51.832829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.856 [2024-11-04 14:48:51.832850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.856 [2024-11-04 14:48:51.835601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.856 [2024-11-04 14:48:51.835805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.856 [2024-11-04 14:48:51.835826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.856 [2024-11-04 14:48:51.838585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.856 [2024-11-04 14:48:51.838793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.856 [2024-11-04 14:48:51.838807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.856 [2024-11-04 14:48:51.841537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.856 [2024-11-04 14:48:51.841758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.856 [2024-11-04 14:48:51.841777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.856 [2024-11-04 14:48:51.844544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.856 [2024-11-04 14:48:51.844750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.856 [2024-11-04 14:48:51.844768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.856 [2024-11-04 14:48:51.847528] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.856 [2024-11-04 14:48:51.847735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.856 [2024-11-04 14:48:51.847755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.856 [2024-11-04 14:48:51.850522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.856 [2024-11-04 14:48:51.850729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.856 [2024-11-04 14:48:51.850749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.856 [2024-11-04 14:48:51.853530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.856 [2024-11-04 14:48:51.853748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.856 [2024-11-04 14:48:51.853768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.856 [2024-11-04 14:48:51.856546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.856 [2024-11-04 14:48:51.856751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.857 [2024-11-04 14:48:51.856772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.857 [2024-11-04 14:48:51.859511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.857 [2024-11-04 14:48:51.859718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.857 [2024-11-04 14:48:51.859735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.857 [2024-11-04 14:48:51.862486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.857 [2024-11-04 14:48:51.862696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.857 [2024-11-04 14:48:51.862716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.857 [2024-11-04 14:48:51.865505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.857 [2024-11-04 14:48:51.865727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.857 [2024-11-04 14:48:51.865744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:22:42.857 [2024-11-04 14:48:51.868500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.857 [2024-11-04 14:48:51.868706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.857 [2024-11-04 14:48:51.868726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.857 [2024-11-04 14:48:51.871435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.857 [2024-11-04 14:48:51.871646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.857 [2024-11-04 14:48:51.871665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.857 [2024-11-04 14:48:51.874429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.857 [2024-11-04 14:48:51.874639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.857 [2024-11-04 14:48:51.874658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.857 [2024-11-04 14:48:51.877427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.857 [2024-11-04 14:48:51.877648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.857 [2024-11-04 14:48:51.877662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.857 [2024-11-04 14:48:51.880419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.857 [2024-11-04 14:48:51.880629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.857 [2024-11-04 14:48:51.880649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.857 [2024-11-04 14:48:51.883420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.857 [2024-11-04 14:48:51.883627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.857 [2024-11-04 14:48:51.883654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.857 [2024-11-04 14:48:51.886442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.857 [2024-11-04 14:48:51.886653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.857 [2024-11-04 14:48:51.886685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.857 [2024-11-04 14:48:51.889442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.857 [2024-11-04 14:48:51.889661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.857 [2024-11-04 14:48:51.889713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.857 [2024-11-04 14:48:51.892468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.857 [2024-11-04 14:48:51.892681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.857 [2024-11-04 14:48:51.892695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.857 [2024-11-04 14:48:51.895432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.857 [2024-11-04 14:48:51.895639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.857 [2024-11-04 14:48:51.895659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.857 [2024-11-04 14:48:51.898402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.857 [2024-11-04 14:48:51.898598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.857 [2024-11-04 14:48:51.898628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.857 [2024-11-04 14:48:51.901375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.857 [2024-11-04 14:48:51.901571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.857 [2024-11-04 14:48:51.901615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.857 [2024-11-04 14:48:51.904326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.857 [2024-11-04 14:48:51.904517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.857 [2024-11-04 14:48:51.904538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.857 [2024-11-04 14:48:51.907236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.857 [2024-11-04 14:48:51.907430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.857 [2024-11-04 14:48:51.907445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.857 [2024-11-04 14:48:51.910164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.857 [2024-11-04 14:48:51.910357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.857 [2024-11-04 14:48:51.910378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.857 [2024-11-04 14:48:51.913095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.857 [2024-11-04 14:48:51.913288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.857 [2024-11-04 14:48:51.913309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.857 [2024-11-04 14:48:51.915986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.857 [2024-11-04 14:48:51.916177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.857 [2024-11-04 14:48:51.916198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.857 [2024-11-04 14:48:51.918903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.857 [2024-11-04 14:48:51.919093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.857 [2024-11-04 14:48:51.919112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.857 [2024-11-04 14:48:51.921831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.857 [2024-11-04 14:48:51.922023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.857 [2024-11-04 14:48:51.922044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.857 [2024-11-04 14:48:51.924718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.857 [2024-11-04 14:48:51.924909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.857 [2024-11-04 14:48:51.924929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.857 [2024-11-04 14:48:51.927599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:42.857 [2024-11-04 14:48:51.927800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.857 [2024-11-04 14:48:51.927821] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:22:42.857 [2024-11-04 14:48:51.930486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90
00:22:42.857 [2024-11-04 14:48:51.930690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.857 [2024-11-04 14:48:51.930710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:42.857 [2024-11-04 14:48:51.933389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90
00:22:42.857 [2024-11-04 14:48:51.933588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.857 [2024-11-04 14:48:51.933618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line sequence — tcp.c:2233:data_crc32_calc_done "Data digest error" on tqpair=(0x1af0a90) with pdu=0x2000166fef90, nvme_qpair.c:243 WRITE command print (sqid:1 cid:15 nsid:1, varying lba, len:32), and nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — repeats for the remaining injected-digest-error writes from 14:48:51.936 through 14:48:52.336965; console timestamps advance from 00:22:42.857 to 00:22:43.380 ...]
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.380 10382.00 IOPS, 1297.75 MiB/s [2024-11-04T14:48:52.520Z] [2024-11-04 14:48:52.340656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af0a90) with pdu=0x2000166fef90 00:22:43.380 [2024-11-04 14:48:52.340854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.380 [2024-11-04 14:48:52.340876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.380 00:22:43.380 Latency(us) 00:22:43.380 [2024-11-04T14:48:52.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.380 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:43.380 nvme0n1 : 2.00 10378.22 1297.28 0.00 0.00 1538.72 1090.17 9275.86 00:22:43.380 [2024-11-04T14:48:52.520Z] =================================================================================================================== 00:22:43.380 [2024-11-04T14:48:52.520Z] Total : 10378.22 1297.28 0.00 0.00 1538.72 1090.17 9275.86 00:22:43.380 { 00:22:43.380 "results": [ 00:22:43.380 { 00:22:43.380 "job": "nvme0n1", 00:22:43.380 "core_mask": "0x2", 00:22:43.380 "workload": "randwrite", 00:22:43.380 "status": "finished", 00:22:43.380 "queue_depth": 16, 00:22:43.380 "io_size": 131072, 00:22:43.380 "runtime": 2.002849, 00:22:43.380 "iops": 10378.216230978971, 00:22:43.380 "mibps": 1297.2770288723714, 00:22:43.380 "io_failed": 0, 00:22:43.380 "io_timeout": 0, 00:22:43.380 "avg_latency_us": 1538.7207330377696, 00:22:43.380 "min_latency_us": 1090.1661538461537, 00:22:43.380 "max_latency_us": 9275.864615384615 00:22:43.380 } 00:22:43.380 ], 00:22:43.380 "core_count": 1 00:22:43.380 } 00:22:43.380 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:43.380 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:43.380 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:43.380 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:43.380 | .driver_specific 00:22:43.380 | .nvme_error 00:22:43.380 | .status_code 00:22:43.380 | .command_transient_transport_error' 00:22:43.637 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 670 > 0 )) 00:22:43.637 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 78904 00:22:43.637 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 78904 ']' 00:22:43.637 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 78904 00:22:43.637 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:22:43.637 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:43.637 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78904 00:22:43.637 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 
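The transient-error check traced above reduces to one RPC call against the bdevperf socket plus a jq filter; a minimal sketch of that query, reusing the rpc.py path, socket, bdev name and jq expression shown in this log (the variable names and the final echo are illustrative only):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Read per-bdev NVMe error counters from the running bdevperf app and pull out
  # the transient transport error count that the digest_error test asserts is > 0.
  errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 )) && echo "observed $errcount transient transport errors"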
00:22:43.637 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:43.637 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78904' 00:22:43.637 killing process with pid 78904 00:22:43.637 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 78904 00:22:43.637 Received shutdown signal, test time was about 2.000000 seconds 00:22:43.637 00:22:43.637 Latency(us) 00:22:43.637 [2024-11-04T14:48:52.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.637 [2024-11-04T14:48:52.777Z] =================================================================================================================== 00:22:43.637 [2024-11-04T14:48:52.777Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:43.637 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 78904 00:22:43.637 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 78716 00:22:43.637 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 78716 ']' 00:22:43.637 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 78716 00:22:43.637 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:22:43.638 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:43.638 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78716 00:22:43.638 killing process with pid 78716 00:22:43.638 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:43.638 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:43.638 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78716' 00:22:43.638 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 78716 00:22:43.638 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 78716 00:22:43.896 00:22:43.896 real 0m15.653s 00:22:43.896 user 0m30.053s 00:22:43.896 sys 0m3.592s 00:22:43.896 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:43.896 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:43.896 ************************************ 00:22:43.896 END TEST nvmf_digest_error 00:22:43.896 ************************************ 00:22:43.896 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:22:43.896 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:22:43.896 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:43.896 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:22:43.896 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:43.896 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:22:43.896 14:48:52 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:43.896 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:43.896 rmmod nvme_tcp 00:22:43.896 rmmod nvme_fabrics 00:22:43.896 rmmod nvme_keyring 00:22:43.896 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:43.896 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:22:43.896 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:22:43.896 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 78716 ']' 00:22:43.896 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 78716 00:22:43.896 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 78716 ']' 00:22:43.896 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 78716 00:22:43.896 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (78716) - No such process 00:22:43.896 Process with pid 78716 is not found 00:22:43.896 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 78716 is not found' 00:22:43.896 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:43.896 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:43.896 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:43.896 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:22:43.896 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:22:43.896 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:43.896 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:22:43.896 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:43.896 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:43.896 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:43.896 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:43.896 14:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:43.896 14:48:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:43.896 14:48:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:43.896 14:48:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:43.896 14:48:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:43.896 14:48:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:43.896 14:48:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:44.154 14:48:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:44.154 14:48:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:44.154 14:48:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:44.154 14:48:53 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:44.154 14:48:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:44.154 14:48:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.154 14:48:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:44.154 14:48:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.154 14:48:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:22:44.154 00:22:44.154 real 0m33.089s 00:22:44.154 user 1m2.484s 00:22:44.154 sys 0m7.390s 00:22:44.154 14:48:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:44.154 14:48:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:22:44.154 ************************************ 00:22:44.154 END TEST nvmf_digest 00:22:44.154 ************************************ 00:22:44.154 14:48:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:22:44.154 14:48:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:22:44.154 14:48:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:22:44.154 14:48:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:44.154 14:48:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:44.154 14:48:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.154 ************************************ 00:22:44.154 START TEST nvmf_host_multipath 00:22:44.154 ************************************ 00:22:44.154 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:22:44.154 * Looking for test storage... 
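The multipath stage that begins here is driven by the harness through run_test; a hedged sketch of an equivalent standalone invocation, assuming the repository layout shown in this log and root privileges for the network-namespace setup that follows:

  cd /home/vagrant/spdk_repo/spdk
  sudo ./test/nvmf/host/multipath.sh --transport=tcp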
00:22:44.154 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:44.154 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:44.154 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:22:44.154 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:44.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.412 --rc genhtml_branch_coverage=1 00:22:44.412 --rc genhtml_function_coverage=1 00:22:44.412 --rc genhtml_legend=1 00:22:44.412 --rc geninfo_all_blocks=1 00:22:44.412 --rc geninfo_unexecuted_blocks=1 00:22:44.412 00:22:44.412 ' 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:44.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.412 --rc genhtml_branch_coverage=1 00:22:44.412 --rc genhtml_function_coverage=1 00:22:44.412 --rc genhtml_legend=1 00:22:44.412 --rc geninfo_all_blocks=1 00:22:44.412 --rc geninfo_unexecuted_blocks=1 00:22:44.412 00:22:44.412 ' 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:44.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.412 --rc genhtml_branch_coverage=1 00:22:44.412 --rc genhtml_function_coverage=1 00:22:44.412 --rc genhtml_legend=1 00:22:44.412 --rc geninfo_all_blocks=1 00:22:44.412 --rc geninfo_unexecuted_blocks=1 00:22:44.412 00:22:44.412 ' 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:44.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.412 --rc genhtml_branch_coverage=1 00:22:44.412 --rc genhtml_function_coverage=1 00:22:44.412 --rc genhtml_legend=1 00:22:44.412 --rc geninfo_all_blocks=1 00:22:44.412 --rc geninfo_unexecuted_blocks=1 00:22:44.412 00:22:44.412 ' 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:44.412 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:44.413 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:44.413 Cannot find device "nvmf_init_br" 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:44.413 Cannot find device "nvmf_init_br2" 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:44.413 Cannot find device "nvmf_tgt_br" 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:44.413 Cannot find device "nvmf_tgt_br2" 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:44.413 Cannot find device "nvmf_init_br" 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:44.413 Cannot find device "nvmf_init_br2" 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:44.413 Cannot find device "nvmf_tgt_br" 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:44.413 Cannot find device "nvmf_tgt_br2" 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:44.413 Cannot find device "nvmf_br" 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:44.413 Cannot find device "nvmf_init_if" 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:22:44.413 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:44.413 Cannot find device "nvmf_init_if2" 00:22:44.414 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:22:44.414 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:22:44.414 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:44.414 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:22:44.414 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:44.414 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:44.414 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:22:44.414 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:44.414 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:44.414 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:44.414 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:44.414 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:44.414 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:44.414 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
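The interface plumbing traced around this point builds veth pairs into a target network namespace and bridges the host-side ends; a condensed sketch of one such pair, using only commands and addresses that appear in this log (the second initiator/target pair, the namespace lo link and the iptables ACCEPT rules are configured the same way and omitted here):

  # One initiator-side and one target-side veth pair; the *_br ends join the bridge,
  # nvmf_tgt_if moves into the namespace and carries the 10.0.0.3 listener address.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br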
00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:44.672 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:44.672 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:22:44.672 00:22:44.672 --- 10.0.0.3 ping statistics --- 00:22:44.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.672 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:44.672 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:44.672 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:22:44.672 00:22:44.672 --- 10.0.0.4 ping statistics --- 00:22:44.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.672 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:44.672 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:44.672 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.014 ms 00:22:44.672 00:22:44.672 --- 10.0.0.1 ping statistics --- 00:22:44.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.672 rtt min/avg/max/mdev = 0.014/0.014/0.014/0.000 ms 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:44.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:44.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:22:44.672 00:22:44.672 --- 10.0.0.2 ping statistics --- 00:22:44.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.672 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:22:44.672 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:44.673 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:44.673 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:44.673 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:44.673 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:44.673 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:44.673 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:44.673 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:22:44.673 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:44.673 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:44.673 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:44.673 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=79217 00:22:44.673 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 79217 00:22:44.673 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # '[' -z 79217 ']' 00:22:44.673 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.673 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:44.673 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:44.673 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.673 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:44.673 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:44.673 [2024-11-04 14:48:53.714149] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:22:44.673 [2024-11-04 14:48:53.714204] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.930 [2024-11-04 14:48:53.854567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:44.930 [2024-11-04 14:48:53.889077] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.930 [2024-11-04 14:48:53.889116] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.930 [2024-11-04 14:48:53.889122] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.930 [2024-11-04 14:48:53.889127] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.930 [2024-11-04 14:48:53.889131] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:44.930 [2024-11-04 14:48:53.889842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.930 [2024-11-04 14:48:53.890078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.930 [2024-11-04 14:48:53.920163] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:44.930 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:44.930 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@866 -- # return 0 00:22:44.930 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:44.930 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:44.930 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:44.930 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.930 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=79217 00:22:44.930 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:45.187 [2024-11-04 14:48:54.179238] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.187 14:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:45.444 Malloc0 00:22:45.444 14:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:45.701 14:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:45.701 14:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:45.959 [2024-11-04 14:48:55.007146] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:45.959 14:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:46.229 [2024-11-04 14:48:55.203234] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:46.229 14:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=79259 00:22:46.229 14:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:46.229 14:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:46.229 14:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 79259 /var/tmp/bdevperf.sock 00:22:46.229 14:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # '[' -z 79259 ']' 00:22:46.229 14:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:46.229 14:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:46.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:46.229 14:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:46.229 14:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:46.229 14:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:47.161 14:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:47.161 14:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@866 -- # return 0 00:22:47.161 14:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:47.161 14:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:47.419 Nvme0n1 00:22:47.678 14:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:47.936 Nvme0n1 00:22:47.936 14:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:22:47.936 14:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:48.870 14:48:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:22:48.870 14:48:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:49.128 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:49.128 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:22:49.128 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79217 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:49.128 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=79303 00:22:49.128 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:55.742 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:55.742 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:55.742 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:55.742 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:55.742 Attaching 4 probes... 00:22:55.742 @path[10.0.0.3, 4421]: 26189 00:22:55.742 @path[10.0.0.3, 4421]: 26936 00:22:55.742 @path[10.0.0.3, 4421]: 26857 00:22:55.742 @path[10.0.0.3, 4421]: 27036 00:22:55.742 @path[10.0.0.3, 4421]: 27075 00:22:55.742 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:55.742 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:55.742 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:55.742 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:55.742 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:55.742 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:55.742 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 79303 00:22:55.742 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:55.742 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:22:55.742 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:55.742 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:56.012 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:22:56.012 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=79422 00:22:56.012 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:56.012 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79217 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:02.566 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:02.566 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:02.566 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:23:02.566 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:02.566 Attaching 4 probes... 00:23:02.566 @path[10.0.0.3, 4420]: 25602 00:23:02.566 @path[10.0.0.3, 4420]: 26114 00:23:02.566 @path[10.0.0.3, 4420]: 26119 00:23:02.566 @path[10.0.0.3, 4420]: 26190 00:23:02.566 @path[10.0.0.3, 4420]: 25989 00:23:02.566 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:02.566 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:02.566 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:02.566 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:23:02.566 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:02.566 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:02.566 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 79422 00:23:02.566 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:02.566 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:23:02.566 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:23:02.566 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:02.566 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:23:02.566 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=79531 00:23:02.566 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:02.566 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79217 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:09.119 14:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:09.119 14:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:09.119 14:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:23:09.119 14:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:09.119 Attaching 4 probes... 00:23:09.119 @path[10.0.0.3, 4421]: 16856 00:23:09.119 @path[10.0.0.3, 4421]: 26157 00:23:09.119 @path[10.0.0.3, 4421]: 26237 00:23:09.119 @path[10.0.0.3, 4421]: 26415 00:23:09.119 @path[10.0.0.3, 4421]: 26733 00:23:09.119 14:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:09.119 14:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:09.119 14:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:09.119 14:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:23:09.119 14:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:09.119 14:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:09.119 14:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 79531 00:23:09.119 14:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:09.119 14:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:23:09.119 14:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:23:09.119 14:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:23:09.119 14:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:23:09.119 14:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=79649 00:23:09.119 14:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79217 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:09.119 14:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:15.669 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:15.669 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:23:15.669 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:23:15.669 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:15.669 Attaching 4 probes... 
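With both listeners forced inaccessible at multipath.sh@93, neither port can serve I/O, so the trace.txt dump that follows contains only timestamps and no @path[...] counters, and the jq query for ana_state=="" matches no listener. Both the parsed port and active_port therefore come back empty, which is exactly what confirm_io_on_port '' '' checks for. A minimal sketch of that degenerate case (the trace.txt path is the one from the log; running the pipeline standalone like this is illustrative only):

    # No path passes I/O -> trace.txt has no @path[...] lines -> every stage yields "".
    port=$(awk '$1=="@path[10.0.0.3," {print $2}' /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt |
        sed -n 1p | cut -d ']' -f1)
    [[ $port == '' ]] && echo 'no I/O observed on either listener'   # mirrors the [[ '' == '' ]] checks that follow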
00:23:15.669 00:23:15.669 00:23:15.669 00:23:15.669 00:23:15.669 00:23:15.669 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:15.669 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:15.669 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:15.669 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:23:15.669 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:23:15.669 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:23:15.669 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 79649 00:23:15.669 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:15.669 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:23:15.669 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:15.669 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:15.927 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:23:15.927 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79217 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:15.927 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=79764 00:23:15.927 14:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:22.483 14:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:22.483 14:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:22.483 14:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:23:22.483 14:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:22.483 Attaching 4 probes... 
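The @path[10.0.0.3, 4421] lines that follow are the per-path I/O counters printed by nvmf_path.bt: one bpftrace map entry per (traddr, trsvcid) pair, sampled while bdevperf keeps issuing I/O. The whole confirm_io_on_port helper (multipath.sh@64-@73) amounts to: start the collector, wait, ask the target which listener is in the requested ANA state, and check that the first counter in trace.txt names the same port. A rough bash reconstruction from the trace; the function wrapper, $rootdir, $target_pid (79217 in this run) and the redirection into trace.txt are assumptions rather than lines copied from multipath.sh:

    confirm_io_on_port() {    # hypothetical reconstruction of multipath.sh@64-@73
        local ana_state=$1 expected_port=$2
        # per-path I/O counter attached to the pid handed to bpftrace.sh (79217 here)
        "$rootdir/scripts/bpftrace.sh" "$target_pid" "$rootdir/scripts/bpf/nvmf_path.bt" \
            > "$rootdir/test/nvmf/host/trace.txt" &
        local dtrace_pid=$!
        sleep 6    # let I/O accumulate on whichever path is currently usable

        # listener currently reporting the requested ANA state
        local active_port
        active_port=$("$rootdir/scripts/rpc.py" nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
            jq -r ".[] | select(.ana_states[0].ana_state==\"$ana_state\") | .address.trsvcid")

        # port named by the first @path[...] counter in trace.txt
        local port
        port=$(awk '$1=="@path[10.0.0.3," {print $2}' "$rootdir/test/nvmf/host/trace.txt" |
            sed -n 1p | cut -d ']' -f1)

        [[ $port == "$expected_port" && $active_port == "$expected_port" ]]
        local rc=$?
        kill "$dtrace_pid"
        rm -f "$rootdir/test/nvmf/host/trace.txt"
        return "$rc"
    }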
00:23:22.483 @path[10.0.0.3, 4421]: 25343 00:23:22.483 @path[10.0.0.3, 4421]: 26088 00:23:22.483 @path[10.0.0.3, 4421]: 26309 00:23:22.483 @path[10.0.0.3, 4421]: 26032 00:23:22.483 @path[10.0.0.3, 4421]: 25917 00:23:22.483 14:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:22.483 14:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:22.483 14:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:22.483 14:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:23:22.483 14:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:22.483 14:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:22.483 14:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 79764 00:23:22.483 14:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:22.483 14:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:23:22.483 14:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:23:23.416 14:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:23:23.416 14:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=79893 00:23:23.416 14:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:23.416 14:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79217 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:29.969 14:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:29.969 14:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:29.969 14:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:23:29.969 14:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:29.969 Attaching 4 probes... 
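The counters that follow sit entirely on port 4420: multipath.sh@100 has just deleted the 4421 listener, so even though 4420 only reports non_optimized it is the sole remaining path and the host fails over to it. The step sequence as it appears in the trace (rpc.py invocations copied from the log, confirm_io_on_port as sketched above):

    # Failover: drop the optimized listener, give the host a moment, then verify
    # that I/O continues on the remaining non_optimized path (multipath.sh@100-@104).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    sleep 1
    confirm_io_on_port non_optimized 4420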
00:23:29.969 @path[10.0.0.3, 4420]: 24795 00:23:29.969 @path[10.0.0.3, 4420]: 24936 00:23:29.969 @path[10.0.0.3, 4420]: 24416 00:23:29.969 @path[10.0.0.3, 4420]: 25003 00:23:29.969 @path[10.0.0.3, 4420]: 25049 00:23:29.969 14:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:29.969 14:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:29.969 14:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:29.969 14:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:23:29.969 14:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:29.969 14:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:29.969 14:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 79893 00:23:29.969 14:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:29.969 14:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:23:29.969 [2024-11-04 14:49:38.669450] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:29.969 14:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:29.969 14:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:23:36.528 14:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:23:36.528 14:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80077 00:23:36.528 14:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:36.528 14:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79217 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:41.823 14:49:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:41.823 14:49:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:42.081 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:23:42.082 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:42.082 Attaching 4 probes... 
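The final round shows the reverse move: the counters that follow are back on 4421 because multipath.sh@107-@108 re-added that listener and marked it optimized again, so the host fails back from the non_optimized 4420 path. The sequence as logged (commands copied from the trace, confirm_io_on_port as sketched earlier):

    # Failback: restore the 4421 listener as optimized and confirm I/O returns to it
    # (multipath.sh@107-@112).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
    sleep 6
    confirm_io_on_port optimized 4421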
00:23:42.082 @path[10.0.0.3, 4421]: 25322 00:23:42.082 @path[10.0.0.3, 4421]: 25983 00:23:42.082 @path[10.0.0.3, 4421]: 25876 00:23:42.082 @path[10.0.0.3, 4421]: 26002 00:23:42.082 @path[10.0.0.3, 4421]: 25791 00:23:42.082 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:42.082 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:42.082 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:42.082 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:23:42.082 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:42.082 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:42.082 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80077 00:23:42.082 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:42.082 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 79259 00:23:42.082 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' -z 79259 ']' 00:23:42.082 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # kill -0 79259 00:23:42.082 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # uname 00:23:42.082 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:42.082 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79259 00:23:42.082 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:42.082 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:42.082 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79259' 00:23:42.082 killing process with pid 79259 00:23:42.082 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@971 -- # kill 79259 00:23:42.082 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@976 -- # wait 79259 00:23:42.082 { 00:23:42.082 "results": [ 00:23:42.082 { 00:23:42.082 "job": "Nvme0n1", 00:23:42.082 "core_mask": "0x4", 00:23:42.082 "workload": "verify", 00:23:42.082 "status": "terminated", 00:23:42.082 "verify_range": { 00:23:42.082 "start": 0, 00:23:42.082 "length": 16384 00:23:42.082 }, 00:23:42.082 "queue_depth": 128, 00:23:42.082 "io_size": 4096, 00:23:42.082 "runtime": 54.213994, 00:23:42.082 "iops": 10979.858816526228, 00:23:42.082 "mibps": 42.89007350205558, 00:23:42.082 "io_failed": 0, 00:23:42.082 "io_timeout": 0, 00:23:42.082 "avg_latency_us": 11634.739118976182, 00:23:42.082 "min_latency_us": 475.7661538461538, 00:23:42.082 "max_latency_us": 7020619.618461538 00:23:42.082 } 00:23:42.082 ], 00:23:42.082 "core_count": 1 00:23:42.082 } 00:23:42.349 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 79259 00:23:42.349 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:42.349 [2024-11-04 14:48:55.257870] Starting SPDK v25.01-pre git sha1 6e713f9c6 
/ DPDK 24.03.0 initialization... 00:23:42.349 [2024-11-04 14:48:55.257938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79259 ] 00:23:42.349 [2024-11-04 14:48:55.394539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.349 [2024-11-04 14:48:55.429214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.349 [2024-11-04 14:48:55.459936] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:42.349 Running I/O for 90 seconds... 00:23:42.349 9227.00 IOPS, 36.04 MiB/s [2024-11-04T14:49:51.489Z] 11364.00 IOPS, 44.39 MiB/s [2024-11-04T14:49:51.489Z] 12052.33 IOPS, 47.08 MiB/s [2024-11-04T14:49:51.489Z] 12403.25 IOPS, 48.45 MiB/s [2024-11-04T14:49:51.489Z] 12607.40 IOPS, 49.25 MiB/s [2024-11-04T14:49:51.489Z] 12764.83 IOPS, 49.86 MiB/s [2024-11-04T14:49:51.489Z] 12871.57 IOPS, 50.28 MiB/s [2024-11-04T14:49:51.489Z] [2024-11-04 14:49:04.909614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.349 [2024-11-04 14:49:04.909664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:42.349 [2024-11-04 14:49:04.909698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.349 [2024-11-04 14:49:04.909707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:42.349 [2024-11-04 14:49:04.909721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.349 [2024-11-04 14:49:04.909728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:42.349 [2024-11-04 14:49:04.909741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.349 [2024-11-04 14:49:04.909748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:42.349 [2024-11-04 14:49:04.909761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.349 [2024-11-04 14:49:04.909768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:42.349 [2024-11-04 14:49:04.909781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.349 [2024-11-04 14:49:04.909788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:42.349 [2024-11-04 14:49:04.909801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.349 [2024-11-04 14:49:04.909808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:42.349 [2024-11-04 14:49:04.909821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.349 [2024-11-04 14:49:04.909828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:42.349 [2024-11-04 14:49:04.909840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.349 [2024-11-04 14:49:04.909847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:42.349 [2024-11-04 14:49:04.909860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.349 [2024-11-04 14:49:04.909886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:42.349 [2024-11-04 14:49:04.909899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.349 [2024-11-04 14:49:04.909906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:42.349 [2024-11-04 14:49:04.909919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.349 [2024-11-04 14:49:04.909926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:42.349 [2024-11-04 14:49:04.909939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.349 [2024-11-04 14:49:04.909946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:42.349 [2024-11-04 14:49:04.909958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.349 [2024-11-04 14:49:04.909965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:42.349 [2024-11-04 14:49:04.909978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.349 [2024-11-04 14:49:04.909985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.909997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.350 [2024-11-04 14:49:04.910005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.350 [2024-11-04 14:49:04.910025] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.350 [2024-11-04 14:49:04.910045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.350 [2024-11-04 14:49:04.910066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.350 [2024-11-04 14:49:04.910087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:38856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.350 [2024-11-04 14:49:04.910106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.350 [2024-11-04 14:49:04.910126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:38872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.350 [2024-11-04 14:49:04.910150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.350 [2024-11-04 14:49:04.910170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:38888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.350 [2024-11-04 14:49:04.910190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:38896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.350 [2024-11-04 14:49:04.910210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:38904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:42.350 [2024-11-04 14:49:04.910230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.350 [2024-11-04 14:49:04.910250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:38920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.350 [2024-11-04 14:49:04.910269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.350 [2024-11-04 14:49:04.910289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:38936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.350 [2024-11-04 14:49:04.910309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:38944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.350 [2024-11-04 14:49:04.910329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.350 [2024-11-04 14:49:04.910350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.350 [2024-11-04 14:49:04.910370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.350 [2024-11-04 14:49:04.910396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.350 [2024-11-04 14:49:04.910416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 
lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.350 [2024-11-04 14:49:04.910435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.350 [2024-11-04 14:49:04.910455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.350 [2024-11-04 14:49:04.910475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.350 [2024-11-04 14:49:04.910495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.350 [2024-11-04 14:49:04.910514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.350 [2024-11-04 14:49:04.910534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.350 [2024-11-04 14:49:04.910554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.350 [2024-11-04 14:49:04.910574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.350 [2024-11-04 14:49:04.910594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.350 [2024-11-04 14:49:04.910621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910634] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.350 [2024-11-04 14:49:04.910643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.350 [2024-11-04 14:49:04.910663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.350 [2024-11-04 14:49:04.910682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.350 [2024-11-04 14:49:04.910702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:38968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.350 [2024-11-04 14:49:04.910722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.350 [2024-11-04 14:49:04.910741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:42.350 [2024-11-04 14:49:04.910754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.351 [2024-11-04 14:49:04.910761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:42.351 [2024-11-04 14:49:04.910773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:38992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.351 [2024-11-04 14:49:04.910781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:42.351 [2024-11-04 14:49:04.910793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:39000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.351 [2024-11-04 14:49:04.910800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:42.351 [2024-11-04 14:49:04.910813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:39008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.351 [2024-11-04 14:49:04.910820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001d p:0 m:0 dnr:0 
00:23:42.351 [2024-11-04 14:49:04.910832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:39016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.351 [2024-11-04 14:49:04.910839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:42.351 [2024-11-04 14:49:04.910851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.351 [2024-11-04 14:49:04.910858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:42.351 [2024-11-04 14:49:04.910871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.351 [2024-11-04 14:49:04.910881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:42.351 [2024-11-04 14:49:04.910894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:39040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.351 [2024-11-04 14:49:04.910901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:42.351 [2024-11-04 14:49:04.910913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.351 [2024-11-04 14:49:04.910920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:42.351 [2024-11-04 14:49:04.910933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:39056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.351 [2024-11-04 14:49:04.910939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:42.351 [2024-11-04 14:49:04.910952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:39064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.351 [2024-11-04 14:49:04.910959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:42.351 [2024-11-04 14:49:04.910971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:39072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.351 [2024-11-04 14:49:04.910978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:42.351 [2024-11-04 14:49:04.910993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.351 [2024-11-04 14:49:04.911000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:42.351 [2024-11-04 14:49:04.911012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.351 [2024-11-04 14:49:04.911019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:42.351 [2024-11-04 14:49:04.911040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.351 [2024-11-04 14:49:04.911047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:42.351 [2024-11-04 14:49:04.911059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.351 [2024-11-04 14:49:04.911066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:42.351 [2024-11-04 14:49:04.911079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.351 [2024-11-04 14:49:04.911086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:42.351 [2024-11-04 14:49:04.911098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.351 [2024-11-04 14:49:04.911105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:42.351 [2024-11-04 14:49:04.911117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.351 [2024-11-04 14:49:04.911125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:42.351 [2024-11-04 14:49:04.911140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.351 [2024-11-04 14:49:04.911148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:42.351 [2024-11-04 14:49:04.911160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.351 [2024-11-04 14:49:04.911167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:42.351 [2024-11-04 14:49:04.911180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.351 [2024-11-04 14:49:04.911187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:42.351 [2024-11-04 14:49:04.911199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.351 [2024-11-04 14:49:04.911207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:42.351 [2024-11-04 14:49:04.911219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.351 [2024-11-04 14:49:04.911226] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:42.351 [2024-11-04 14:49:04.911239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.351 [2024-11-04 14:49:04.911246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:42.351 [2024-11-04 14:49:04.911258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.351 [2024-11-04 14:49:04.911266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:42.351 [2024-11-04 14:49:04.911279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.351 [2024-11-04 14:49:04.911285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:42.351 [2024-11-04 14:49:04.911298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.351 [2024-11-04 14:49:04.911305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:42.351 [2024-11-04 14:49:04.911318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.351 [2024-11-04 14:49:04.911325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:42.351 [2024-11-04 14:49:04.911338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:39088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.351 [2024-11-04 14:49:04.911345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:42.351 [2024-11-04 14:49:04.911360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:39096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.351 [2024-11-04 14:49:04.911367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:42.351 [2024-11-04 14:49:04.911383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:39104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.351 [2024-11-04 14:49:04.911390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.911403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:39112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.352 [2024-11-04 14:49:04.911410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.911423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:39120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:42.352 [2024-11-04 14:49:04.911430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.911443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:39128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.352 [2024-11-04 14:49:04.911451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.911463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:39136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.352 [2024-11-04 14:49:04.911470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.911483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.352 [2024-11-04 14:49:04.911490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.911502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.352 [2024-11-04 14:49:04.911510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.911523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.352 [2024-11-04 14:49:04.911530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.911543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.352 [2024-11-04 14:49:04.911550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.911562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:39176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.352 [2024-11-04 14:49:04.911570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.911582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.352 [2024-11-04 14:49:04.911590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.911603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.352 [2024-11-04 14:49:04.911617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.911629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 
lba:39200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.352 [2024-11-04 14:49:04.911639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.911653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.352 [2024-11-04 14:49:04.911661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.911674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.352 [2024-11-04 14:49:04.911681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.911696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.352 [2024-11-04 14:49:04.911703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.911716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.352 [2024-11-04 14:49:04.911723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.911736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.352 [2024-11-04 14:49:04.911743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.911755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.352 [2024-11-04 14:49:04.911763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.911776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.352 [2024-11-04 14:49:04.911783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.911795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.352 [2024-11-04 14:49:04.911803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.911815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.352 [2024-11-04 14:49:04.911823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.911836] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.352 [2024-11-04 14:49:04.911843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.911855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.352 [2024-11-04 14:49:04.911862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.911875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:39232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.352 [2024-11-04 14:49:04.911885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.911898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.352 [2024-11-04 14:49:04.911906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.911918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.352 [2024-11-04 14:49:04.911926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.911939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.352 [2024-11-04 14:49:04.911946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.912982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.352 [2024-11-04 14:49:04.913001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.913018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.352 [2024-11-04 14:49:04.913025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.913038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.352 [2024-11-04 14:49:04.913045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.913061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.352 [2024-11-04 14:49:04.913067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 
00:23:42.352 [2024-11-04 14:49:04.913080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.352 [2024-11-04 14:49:04.913087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.913100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.352 [2024-11-04 14:49:04.913107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.913119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.352 [2024-11-04 14:49:04.913126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.913139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.352 [2024-11-04 14:49:04.913147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.913243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.352 [2024-11-04 14:49:04.913259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.913273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.352 [2024-11-04 14:49:04.913281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:42.352 [2024-11-04 14:49:04.913293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.352 [2024-11-04 14:49:04.913301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:42.353 [2024-11-04 14:49:04.913313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.353 [2024-11-04 14:49:04.913320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:42.353 [2024-11-04 14:49:04.913332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.353 [2024-11-04 14:49:04.913339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:42.353 [2024-11-04 14:49:04.913352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.353 [2024-11-04 14:49:04.913359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:42.353 [2024-11-04 14:49:04.913371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:39824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.353 [2024-11-04 14:49:04.913379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:42.353 [2024-11-04 14:49:04.913391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.353 [2024-11-04 14:49:04.913399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:42.353 [2024-11-04 14:49:04.913413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.353 [2024-11-04 14:49:04.913420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:42.353 12928.75 IOPS, 50.50 MiB/s [2024-11-04T14:49:51.493Z] 12918.89 IOPS, 50.46 MiB/s [2024-11-04T14:49:51.493Z] 12928.60 IOPS, 50.50 MiB/s [2024-11-04T14:49:51.493Z] 12940.18 IOPS, 50.55 MiB/s [2024-11-04T14:49:51.493Z] 12953.17 IOPS, 50.60 MiB/s [2024-11-04T14:49:51.493Z] 12967.85 IOPS, 50.66 MiB/s [2024-11-04T14:49:51.493Z] 12957.00 IOPS, 50.61 MiB/s [2024-11-04T14:49:51.493Z] [2024-11-04 14:49:11.344440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:51696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.353 [2024-11-04 14:49:11.344491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:42.353 [2024-11-04 14:49:11.344525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.353 [2024-11-04 14:49:11.344534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:42.353 [2024-11-04 14:49:11.344547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.353 [2024-11-04 14:49:11.344553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:42.353 [2024-11-04 14:49:11.344583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:51720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.353 [2024-11-04 14:49:11.344590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:42.353 [2024-11-04 14:49:11.344603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.353 [2024-11-04 14:49:11.344620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:42.353 [2024-11-04 14:49:11.344633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:51736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.353 [2024-11-04 14:49:11.344640] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:42.353 [2024-11-04 14:49:11.344652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:51744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.353 [2024-11-04 14:49:11.344659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:42.353 [2024-11-04 14:49:11.344671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.353 [2024-11-04 14:49:11.344678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:42.353 [2024-11-04 14:49:11.344690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:51760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.353 [2024-11-04 14:49:11.344697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:42.353 [2024-11-04 14:49:11.344710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.353 [2024-11-04 14:49:11.344717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:42.353 [2024-11-04 14:49:11.344729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.353 [2024-11-04 14:49:11.344736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:42.353 [2024-11-04 14:49:11.344748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.353 [2024-11-04 14:49:11.344755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:42.353 [2024-11-04 14:49:11.344767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.353 [2024-11-04 14:49:11.344774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:42.353 [2024-11-04 14:49:11.344787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:51800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.353 [2024-11-04 14:49:11.344793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:42.353 [2024-11-04 14:49:11.344805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:51808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.353 [2024-11-04 14:49:11.344812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:42.353 [2024-11-04 14:49:11.344824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:51816 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:42.353 [2024-11-04 14:49:11.344835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:42.353 [2024-11-04 14:49:11.344848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:51248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.353 [2024-11-04 14:49:11.344856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:42.353 [2024-11-04 14:49:11.344869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:51256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.353 [2024-11-04 14:49:11.344876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:42.353 [2024-11-04 14:49:11.344889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:51264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.353 [2024-11-04 14:49:11.344896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:42.353 [2024-11-04 14:49:11.344908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:51272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.353 [2024-11-04 14:49:11.344915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:42.353 [2024-11-04 14:49:11.344928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:51280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.353 [2024-11-04 14:49:11.344935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:42.353 [2024-11-04 14:49:11.344948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:51288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.353 [2024-11-04 14:49:11.344955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:42.353 [2024-11-04 14:49:11.344967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:51296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.354 [2024-11-04 14:49:11.344974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:42.354 [2024-11-04 14:49:11.344987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:51304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.354 [2024-11-04 14:49:11.344994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:42.354 [2024-11-04 14:49:11.345007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:51312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.354 [2024-11-04 14:49:11.345014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:42.354 [2024-11-04 14:49:11.345026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:13 nsid:1 lba:51320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.354 [2024-11-04 14:49:11.345033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:42.354 [2024-11-04 14:49:11.345046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:51328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.354 [2024-11-04 14:49:11.345053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:42.354 [2024-11-04 14:49:11.345065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:51336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.354 [2024-11-04 14:49:11.345075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:42.354 [2024-11-04 14:49:11.345088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:51344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.354 [2024-11-04 14:49:11.345095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:42.354 [2024-11-04 14:49:11.345107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:51352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.354 [2024-11-04 14:49:11.345114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:42.354 [2024-11-04 14:49:11.345127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:51360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.354 [2024-11-04 14:49:11.345133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.354 [2024-11-04 14:49:11.345146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.354 [2024-11-04 14:49:11.345153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:42.354 [2024-11-04 14:49:11.345167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.354 [2024-11-04 14:49:11.345174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:42.354 [2024-11-04 14:49:11.345188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.354 [2024-11-04 14:49:11.345195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:42.354 [2024-11-04 14:49:11.345207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.354 [2024-11-04 14:49:11.345214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:42.354 [2024-11-04 14:49:11.345227] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.354 [2024-11-04 14:49:11.345234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:42.354 [2024-11-04 14:49:11.345246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:51856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.354 [2024-11-04 14:49:11.345253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:42.354 [2024-11-04 14:49:11.345266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.354 [2024-11-04 14:49:11.345273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:42.354 [2024-11-04 14:49:11.345285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.354 [2024-11-04 14:49:11.345293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:42.354 [2024-11-04 14:49:11.345305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.354 [2024-11-04 14:49:11.345312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:42.354 [2024-11-04 14:49:11.345328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.354 [2024-11-04 14:49:11.345335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:42.354 [2024-11-04 14:49:11.345347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.354 [2024-11-04 14:49:11.345354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:42.354 [2024-11-04 14:49:11.345367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:51904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.354 [2024-11-04 14:49:11.345374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:42.354 [2024-11-04 14:49:11.345386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.354 [2024-11-04 14:49:11.345393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:42.354 [2024-11-04 14:49:11.345405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.354 [2024-11-04 14:49:11.345412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000e p:0 m:0 dnr:0 
00:23:42.354 [2024-11-04 14:49:11.345424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.354 [2024-11-04 14:49:11.345431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:42.354 [2024-11-04 14:49:11.345444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.354 [2024-11-04 14:49:11.345451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:42.354 [2024-11-04 14:49:11.345463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.354 [2024-11-04 14:49:11.345470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:42.354 [2024-11-04 14:49:11.345482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:51376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.354 [2024-11-04 14:49:11.345489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:42.354 [2024-11-04 14:49:11.345503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:51384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.354 [2024-11-04 14:49:11.345510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:42.354 [2024-11-04 14:49:11.345522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:51392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.354 [2024-11-04 14:49:11.345529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.345542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:51400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.355 [2024-11-04 14:49:11.345549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.345567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:51408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.355 [2024-11-04 14:49:11.345574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.345587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:51416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.355 [2024-11-04 14:49:11.345594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.345613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:51424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.355 [2024-11-04 14:49:11.345621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:4 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.345633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:51432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.355 [2024-11-04 14:49:11.345656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.345670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:51440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.355 [2024-11-04 14:49:11.345677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.345690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:51448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.355 [2024-11-04 14:49:11.345697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.345710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:51456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.355 [2024-11-04 14:49:11.345717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.345730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.355 [2024-11-04 14:49:11.345737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.345749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.355 [2024-11-04 14:49:11.345756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.345769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.355 [2024-11-04 14:49:11.345776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.345789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.355 [2024-11-04 14:49:11.345796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.345809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.355 [2024-11-04 14:49:11.345816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.345844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:51952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.355 [2024-11-04 14:49:11.345857] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.345870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.355 [2024-11-04 14:49:11.345877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.345890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.355 [2024-11-04 14:49:11.345897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.345909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:51976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.355 [2024-11-04 14:49:11.345916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.345929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.355 [2024-11-04 14:49:11.345936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.345949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.355 [2024-11-04 14:49:11.345956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.345968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.355 [2024-11-04 14:49:11.345975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.345987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.355 [2024-11-04 14:49:11.345994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.346007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:51504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.355 [2024-11-04 14:49:11.346015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.346028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:51512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.355 [2024-11-04 14:49:11.346035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.346047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:51520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:42.355 [2024-11-04 14:49:11.346055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.346068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:51528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.355 [2024-11-04 14:49:11.346075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.346087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:51536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.355 [2024-11-04 14:49:11.346097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.346110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:51544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.355 [2024-11-04 14:49:11.346117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.346129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:51552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.355 [2024-11-04 14:49:11.346137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.346149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:51560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.355 [2024-11-04 14:49:11.346156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.346169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:52016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.355 [2024-11-04 14:49:11.346176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.346189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:52024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.355 [2024-11-04 14:49:11.346196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.346209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:52032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.355 [2024-11-04 14:49:11.346216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.346229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:52040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.355 [2024-11-04 14:49:11.346235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.346248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 
nsid:1 lba:52048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.355 [2024-11-04 14:49:11.346255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.346267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:52056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.355 [2024-11-04 14:49:11.346274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.346287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:52064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.355 [2024-11-04 14:49:11.346294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:42.355 [2024-11-04 14:49:11.346306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:52072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.355 [2024-11-04 14:49:11.346313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.346327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:52080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.356 [2024-11-04 14:49:11.346335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.346350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:52088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.356 [2024-11-04 14:49:11.346358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.346370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:52096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.356 [2024-11-04 14:49:11.346377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.346389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:52104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.356 [2024-11-04 14:49:11.346396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.346409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:52112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.356 [2024-11-04 14:49:11.346416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.346428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:52120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.356 [2024-11-04 14:49:11.346435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.346447] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:52128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.356 [2024-11-04 14:49:11.346454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.346466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:52136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.356 [2024-11-04 14:49:11.346473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.346486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.356 [2024-11-04 14:49:11.346494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.346507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:51576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.356 [2024-11-04 14:49:11.346514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.346527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:51584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.356 [2024-11-04 14:49:11.346534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.346546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:51592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.356 [2024-11-04 14:49:11.346553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.346566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:51600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.356 [2024-11-04 14:49:11.346573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.346588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.356 [2024-11-04 14:49:11.346596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.346615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:51616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.356 [2024-11-04 14:49:11.346623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.346636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:51624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.356 [2024-11-04 14:49:11.346643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 
dnr:0 00:23:42.356 [2024-11-04 14:49:11.346656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:51632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.356 [2024-11-04 14:49:11.346662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.346675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:51640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.356 [2024-11-04 14:49:11.346682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.346695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:51648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.356 [2024-11-04 14:49:11.346702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.346714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:51656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.356 [2024-11-04 14:49:11.346721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.346734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:51664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.356 [2024-11-04 14:49:11.346741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.346753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:51672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.356 [2024-11-04 14:49:11.346760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.346773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:51680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.356 [2024-11-04 14:49:11.346780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.347268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:51688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.356 [2024-11-04 14:49:11.347281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.347301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:52144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.356 [2024-11-04 14:49:11.347308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.347327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:52152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.356 [2024-11-04 14:49:11.347340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.347358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:52160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.356 [2024-11-04 14:49:11.347366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.347384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:52168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.356 [2024-11-04 14:49:11.347391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.347410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:52176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.356 [2024-11-04 14:49:11.347417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.347435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:52184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.356 [2024-11-04 14:49:11.347442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.347460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:52192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.356 [2024-11-04 14:49:11.347467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.347515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:52200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.356 [2024-11-04 14:49:11.347524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:42.356 [2024-11-04 14:49:11.347543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:52208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.356 [2024-11-04 14:49:11.347551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:11.347570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.357 [2024-11-04 14:49:11.347577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:11.347596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:52224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.357 [2024-11-04 14:49:11.347603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:11.347632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:52232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.357 [2024-11-04 14:49:11.347640] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:11.347658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:52240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.357 [2024-11-04 14:49:11.347665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:11.347684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:52248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.357 [2024-11-04 14:49:11.347695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:11.347714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:52256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.357 [2024-11-04 14:49:11.347722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:11.347742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:52264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.357 [2024-11-04 14:49:11.347750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:42.357 12442.00 IOPS, 48.60 MiB/s [2024-11-04T14:49:51.497Z] 12148.81 IOPS, 47.46 MiB/s [2024-11-04T14:49:51.497Z] 12202.65 IOPS, 47.67 MiB/s [2024-11-04T14:49:51.497Z] 12254.83 IOPS, 47.87 MiB/s [2024-11-04T14:49:51.497Z] 12304.26 IOPS, 48.06 MiB/s [2024-11-04T14:49:51.497Z] 12357.45 IOPS, 48.27 MiB/s [2024-11-04T14:49:51.497Z] 12400.24 IOPS, 48.44 MiB/s [2024-11-04T14:49:51.497Z] [2024-11-04 14:49:18.187266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.357 [2024-11-04 14:49:18.187316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:18.187351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.357 [2024-11-04 14:49:18.187359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:18.187372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.357 [2024-11-04 14:49:18.187379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:18.187391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.357 [2024-11-04 14:49:18.187397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:18.187409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13320 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:42.357 [2024-11-04 14:49:18.187416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:18.187428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.357 [2024-11-04 14:49:18.187434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:18.187446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.357 [2024-11-04 14:49:18.187453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:18.187464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.357 [2024-11-04 14:49:18.187471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:18.187483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.357 [2024-11-04 14:49:18.187490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:18.187519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.357 [2024-11-04 14:49:18.187526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:18.187538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.357 [2024-11-04 14:49:18.187544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:18.187557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.357 [2024-11-04 14:49:18.187563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:18.187575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.357 [2024-11-04 14:49:18.187582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:18.187594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.357 [2024-11-04 14:49:18.187601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:18.187623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:96 nsid:1 lba:12920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.357 [2024-11-04 14:49:18.187630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:18.187642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.357 [2024-11-04 14:49:18.187649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:18.187661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.357 [2024-11-04 14:49:18.187669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:18.187682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.357 [2024-11-04 14:49:18.187689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:18.187701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.357 [2024-11-04 14:49:18.187707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:18.187720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.357 [2024-11-04 14:49:18.187726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:18.187739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.357 [2024-11-04 14:49:18.187745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:18.187757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.357 [2024-11-04 14:49:18.187769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:18.187782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.357 [2024-11-04 14:49:18.187789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:18.187801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.357 [2024-11-04 14:49:18.187808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:18.187822] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.357 [2024-11-04 14:49:18.187829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:18.187841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.357 [2024-11-04 14:49:18.187848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:18.187859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.357 [2024-11-04 14:49:18.187866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:18.187878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.357 [2024-11-04 14:49:18.187884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:18.187896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.357 [2024-11-04 14:49:18.187903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:18.187914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.357 [2024-11-04 14:49:18.187921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:42.357 [2024-11-04 14:49:18.187933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.358 [2024-11-04 14:49:18.187939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.187951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.358 [2024-11-04 14:49:18.187958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.187970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.358 [2024-11-04 14:49:18.187976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.187990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.358 [2024-11-04 14:49:18.188000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000d p:0 m:0 dnr:0 
00:23:42.358 [2024-11-04 14:49:18.188013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.358 [2024-11-04 14:49:18.188019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.358 [2024-11-04 14:49:18.188038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.358 [2024-11-04 14:49:18.188057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.358 [2024-11-04 14:49:18.188076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.358 [2024-11-04 14:49:18.188095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.358 [2024-11-04 14:49:18.188114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.358 [2024-11-04 14:49:18.188133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.358 [2024-11-04 14:49:18.188152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.358 [2024-11-04 14:49:18.188172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.358 [2024-11-04 14:49:18.188191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.358 [2024-11-04 14:49:18.188210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.358 [2024-11-04 14:49:18.188230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.358 [2024-11-04 14:49:18.188252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.358 [2024-11-04 14:49:18.188271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.358 [2024-11-04 14:49:18.188290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.358 [2024-11-04 14:49:18.188310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.358 [2024-11-04 14:49:18.188329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.358 [2024-11-04 14:49:18.188348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.358 [2024-11-04 14:49:18.188367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.358 [2024-11-04 14:49:18.188385] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.358 [2024-11-04 14:49:18.188404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.358 [2024-11-04 14:49:18.188423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.358 [2024-11-04 14:49:18.188443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.358 [2024-11-04 14:49:18.188462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.358 [2024-11-04 14:49:18.188485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.358 [2024-11-04 14:49:18.188503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.358 [2024-11-04 14:49:18.188522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.358 [2024-11-04 14:49:18.188540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.358 [2024-11-04 14:49:18.188559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13600 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:42.358 [2024-11-04 14:49:18.188578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.358 [2024-11-04 14:49:18.188597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.358 [2024-11-04 14:49:18.188623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.358 [2024-11-04 14:49:18.188642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.358 [2024-11-04 14:49:18.188660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.358 [2024-11-04 14:49:18.188679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:42.358 [2024-11-04 14:49:18.188692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.359 [2024-11-04 14:49:18.188698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.188710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.359 [2024-11-04 14:49:18.188720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.188732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.359 [2024-11-04 14:49:18.188739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.188752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.359 [2024-11-04 14:49:18.188758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.188771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 
nsid:1 lba:13120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.359 [2024-11-04 14:49:18.188778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.188790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.359 [2024-11-04 14:49:18.188797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.188809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.359 [2024-11-04 14:49:18.188816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.188828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.359 [2024-11-04 14:49:18.188834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.188847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.359 [2024-11-04 14:49:18.188853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.188865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.359 [2024-11-04 14:49:18.188872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.188884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.359 [2024-11-04 14:49:18.188891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.188904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.359 [2024-11-04 14:49:18.188912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.188924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.359 [2024-11-04 14:49:18.188931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.188944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.359 [2024-11-04 14:49:18.188953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.188966] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.359 [2024-11-04 14:49:18.188972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.188984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.359 [2024-11-04 14:49:18.188991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.189003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.359 [2024-11-04 14:49:18.189010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.189022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.359 [2024-11-04 14:49:18.189029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.189041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.359 [2024-11-04 14:49:18.189047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.189059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.359 [2024-11-04 14:49:18.189071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.189083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.359 [2024-11-04 14:49:18.189090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.189102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.359 [2024-11-04 14:49:18.189109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.189121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.359 [2024-11-04 14:49:18.189128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.189140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.359 [2024-11-04 14:49:18.189146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 
00:23:42.359 [2024-11-04 14:49:18.189158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.359 [2024-11-04 14:49:18.189165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.189177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.359 [2024-11-04 14:49:18.189184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.189198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.359 [2024-11-04 14:49:18.189205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.189217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.359 [2024-11-04 14:49:18.189224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.189238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.359 [2024-11-04 14:49:18.189245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.189257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.359 [2024-11-04 14:49:18.189264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.189276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.359 [2024-11-04 14:49:18.189283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.189295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.359 [2024-11-04 14:49:18.189302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.189314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.359 [2024-11-04 14:49:18.189320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:42.359 [2024-11-04 14:49:18.189333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:13272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.360 [2024-11-04 14:49:18.189339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:18.189910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:13280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.360 [2024-11-04 14:49:18.189927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:18.189946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:18.189956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:18.189974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:18.189981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:18.189998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:18.190004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:18.190032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:18.190040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:18.190057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:18.190064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:18.190081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:18.190088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:18.190106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:18.190113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:18.190136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:18.190144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:18.190161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:18.190168] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:18.190186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:18.190193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:18.190210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:18.190217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:18.190234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:18.190241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:18.190258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:18.190265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:18.190282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:18.190288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:18.190306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:18.190312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:18.190368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:18.190377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:18.190395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:18.190405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:18.190423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:18.190430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:18.190448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:42.360 [2024-11-04 14:49:18.190455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:18.190472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:18.190479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:18.190497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:18.190504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:18.190522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:18.190528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:18.190546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:18.190553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:18.190576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:18.190584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:42.360 11988.95 IOPS, 46.83 MiB/s [2024-11-04T14:49:51.500Z] 11467.70 IOPS, 44.80 MiB/s [2024-11-04T14:49:51.500Z] 10989.88 IOPS, 42.93 MiB/s [2024-11-04T14:49:51.500Z] 10550.28 IOPS, 41.21 MiB/s [2024-11-04T14:49:51.500Z] 10144.50 IOPS, 39.63 MiB/s [2024-11-04T14:49:51.500Z] 9768.78 IOPS, 38.16 MiB/s [2024-11-04T14:49:51.500Z] 9419.89 IOPS, 36.80 MiB/s [2024-11-04T14:49:51.500Z] 9420.34 IOPS, 36.80 MiB/s [2024-11-04T14:49:51.500Z] 9540.20 IOPS, 37.27 MiB/s [2024-11-04T14:49:51.500Z] 9656.97 IOPS, 37.72 MiB/s [2024-11-04T14:49:51.500Z] 9761.94 IOPS, 38.13 MiB/s [2024-11-04T14:49:51.500Z] 9859.82 IOPS, 38.51 MiB/s [2024-11-04T14:49:51.500Z] 9950.53 IOPS, 38.87 MiB/s [2024-11-04T14:49:51.500Z] [2024-11-04 14:49:31.248012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:117808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:31.248061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:31.248096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:31.248105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:31.248133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117824 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:31.248141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:31.248153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:31.248160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:31.248172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:117840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:31.248179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:31.248191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:31.248197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:31.248209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:117856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:31.248215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:31.248227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.360 [2024-11-04 14:49:31.248234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:31.248246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.360 [2024-11-04 14:49:31.248252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:31.248265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:117240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.360 [2024-11-04 14:49:31.248271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:42.360 [2024-11-04 14:49:31.248283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:117248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.360 [2024-11-04 14:49:31.248289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:42.361 [2024-11-04 14:49:31.248302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:117256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.361 [2024-11-04 14:49:31.248308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:42.361 [2024-11-04 14:49:31.248321] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:63 nsid:1 lba:117264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.361 [2024-11-04 14:49:31.248327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:42.361 [2024-11-04 14:49:31.248339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:117272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.361 [2024-11-04 14:49:31.248346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:42.361 [2024-11-04 14:49:31.248358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:117280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.361 [2024-11-04 14:49:31.248369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:42.361 [2024-11-04 14:49:31.248382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:117288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.361 [2024-11-04 14:49:31.248389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:42.361 [2024-11-04 14:49:31.248401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:117296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.361 [2024-11-04 14:49:31.248408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:42.361 [2024-11-04 14:49:31.248422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:117304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.361 [2024-11-04 14:49:31.248429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:42.361 [2024-11-04 14:49:31.248441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:117312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.361 [2024-11-04 14:49:31.248448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:42.361 [2024-11-04 14:49:31.248460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:117320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.361 [2024-11-04 14:49:31.248467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:42.361 [2024-11-04 14:49:31.248479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:117328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.361 [2024-11-04 14:49:31.248485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:42.361 [2024-11-04 14:49:31.248498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:117336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.361 [2024-11-04 14:49:31.248504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:42.361 
[2024-11-04 14:49:31.248516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:117344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.361 [2024-11-04 14:49:31.248523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:42.361 [2024-11-04 14:49:31.248535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:117352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.361 [2024-11-04 14:49:31.248542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:42.361 [2024-11-04 14:49:31.248554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.361 [2024-11-04 14:49:31.248561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:42.361 [2024-11-04 14:49:31.248573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.361 [2024-11-04 14:49:31.248580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:42.361 [2024-11-04 14:49:31.248592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.361 [2024-11-04 14:49:31.248602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.361 [2024-11-04 14:49:31.248624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.361 [2024-11-04 14:49:31.248630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:42.361 [2024-11-04 14:49:31.248643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.361 [2024-11-04 14:49:31.248649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:42.361 [2024-11-04 14:49:31.248662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:117400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.361 [2024-11-04 14:49:31.248669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:42.361 [2024-11-04 14:49:31.248681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.361 [2024-11-04 14:49:31.248688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:42.361 [2024-11-04 14:49:31.248701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:117416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.361 [2024-11-04 14:49:31.248708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:42.361 [2024-11-04 14:49:31.248738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.361 [2024-11-04 14:49:31.248747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.361 [2024-11-04 14:49:31.248756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.361 [2024-11-04 14:49:31.248763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.361 [2024-11-04 14:49:31.248771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.361 [2024-11-04 14:49:31.248778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.361 [2024-11-04 14:49:31.248786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:117896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.361 [2024-11-04 14:49:31.248793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.361 [2024-11-04 14:49:31.248801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.361 [2024-11-04 14:49:31.248808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.361 [2024-11-04 14:49:31.248816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.361 [2024-11-04 14:49:31.248823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.361 [2024-11-04 14:49:31.248831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.361 [2024-11-04 14:49:31.248837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.361 [2024-11-04 14:49:31.248850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:117928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.361 [2024-11-04 14:49:31.248857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.361 [2024-11-04 14:49:31.248864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.361 [2024-11-04 14:49:31.248871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.361 [2024-11-04 14:49:31.248879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.361 [2024-11-04 14:49:31.248885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0
00:23:42.361 [2024-11-04 14:49:31.248893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:117440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.361 [2024-11-04 14:49:31.248900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same *NOTICE* pair repeats for every command still queued on qid:1: READs (lba 117440-117792) and WRITEs (lba 117936-118248) are each completed as ABORTED - SQ DELETION (00/08) while the submission queue is deleted for the controller reset ...]
00:23:42.364 [2024-11-04 14:49:31.250194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x886310 is same with the state(6) to be set
00:23:42.364 [2024-11-04 14:49:31.250204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:42.364 [2024-11-04 14:49:31.250209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:42.364 [2024-11-04 14:49:31.250214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117800 len:8 PRP1 0x0 PRP2 0x0
00:23:42.364 [2024-11-04 14:49:31.250224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.364 [2024-11-04 14:49:31.251097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:23:42.364 [2024-11-04 14:49:31.251149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.364 [2024-11-04 14:49:31.251160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.364 [2024-11-04 14:49:31.251179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f2e50 (9): Bad file descriptor
00:23:42.364 [2024-11-04 14:49:31.251444] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:23:42.364 [2024-11-04 14:49:31.251463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e50 with addr=10.0.0.3, port=4421
00:23:42.364 [2024-11-04 14:49:31.251471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2e50 is same with the state(6) to be set
00:23:42.364 [2024-11-04 14:49:31.251487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f2e50 (9): Bad file descriptor
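Two failure signatures above are worth decoding: the completion status printed as "(00/08)" is status code type 0x0 (generic command status) with status code 0x08, i.e. the command was aborted because its submission queue was deleted during the reset, and the connect() errno 111 reported by uring_sock_create is ECONNREFUSED, meaning nothing was accepting on 10.0.0.3:4421 at that instant (the path being failed away from). A quick way to confirm the errno mapping on the build host; this one-liner is only an illustration and is not part of the test scripts:

  # prints: ECONNREFUSED Connection refused
  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'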
00:23:42.364 [2024-11-04 14:49:31.251503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:23:42.364 [2024-11-04 14:49:31.251511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:23:42.364 [2024-11-04 14:49:31.251518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:23:42.364 [2024-11-04 14:49:31.251526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:23:42.364 [2024-11-04 14:49:31.251533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:23:42.364 10029.46 IOPS, 39.18 MiB/s
[2024-11-04T14:49:51.504Z] 10095.31 IOPS, 39.43 MiB/s
[2024-11-04T14:49:51.504Z] 10162.14 IOPS, 39.70 MiB/s
[2024-11-04T14:49:51.504Z] 10220.39 IOPS, 39.92 MiB/s
[2024-11-04T14:49:51.504Z] 10273.82 IOPS, 40.13 MiB/s
[2024-11-04T14:49:51.504Z] 10330.17 IOPS, 40.35 MiB/s
[2024-11-04T14:49:51.504Z] 10383.59 IOPS, 40.56 MiB/s
[2024-11-04T14:49:51.504Z] 10434.45 IOPS, 40.76 MiB/s
[2024-11-04T14:49:51.504Z] 10483.88 IOPS, 40.95 MiB/s
[2024-11-04T14:49:51.504Z] 10530.52 IOPS, 41.13 MiB/s
[2024-11-04T14:49:51.504Z] [2024-11-04 14:49:41.301633] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:23:42.364 10581.42 IOPS, 41.33 MiB/s
[2024-11-04T14:49:51.504Z] 10632.83 IOPS, 41.53 MiB/s
[2024-11-04T14:49:51.504Z] 10683.62 IOPS, 41.73 MiB/s
[2024-11-04T14:49:51.504Z] 10731.96 IOPS, 41.92 MiB/s
[2024-11-04T14:49:51.504Z] 10772.61 IOPS, 42.08 MiB/s
[2024-11-04T14:49:51.504Z] 10817.24 IOPS, 42.25 MiB/s
[2024-11-04T14:49:51.504Z] 10858.31 IOPS, 42.42 MiB/s
[2024-11-04T14:49:51.504Z] 10899.77 IOPS, 42.58 MiB/s
[2024-11-04T14:49:51.504Z] 10938.38 IOPS, 42.73 MiB/s
[2024-11-04T14:49:51.504Z] 10974.19 IOPS, 42.87 MiB/s
[2024-11-04T14:49:51.504Z] Received shutdown signal, test time was about 54.214662 seconds
00:23:42.364
00:23:42.364 Latency(us)
00:23:42.364 [2024-11-04T14:49:51.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:42.364 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:42.364 Verification LBA range: start 0x0 length 0x4000
00:23:42.364 Nvme0n1 : 54.21 10979.86 42.89 0.00 0.00 11634.74 475.77 7020619.62
00:23:42.364 [2024-11-04T14:49:51.504Z] ===================================================================================================================
00:23:42.364 [2024-11-04T14:49:51.504Z] Total : 10979.86 42.89 0.00 0.00 11634.74 475.77 7020619.62
00:23:42.364 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:42.364 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:23:42.364 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:23:42.364 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:23:42.364 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:42.364 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync
00:23:42.623 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:42.623 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:23:42.623 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:42.623 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:42.623 rmmod nvme_tcp 00:23:42.623 rmmod nvme_fabrics 00:23:42.623 rmmod nvme_keyring 00:23:42.623 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:42.623 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:23:42.623 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:23:42.623 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 79217 ']' 00:23:42.623 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 79217 00:23:42.623 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' -z 79217 ']' 00:23:42.623 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # kill -0 79217 00:23:42.623 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # uname 00:23:42.623 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:42.623 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79217 00:23:42.623 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:42.623 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:42.623 killing process with pid 79217 00:23:42.623 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79217' 00:23:42.623 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@971 -- # kill 79217 00:23:42.624 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@976 -- # wait 79217 00:23:42.624 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:42.624 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:42.624 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:42.624 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:23:42.624 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:23:42.624 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:23:42.624 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:42.624 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:42.624 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:42.624 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:42.624 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:42.624 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:42.624 
14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:42.882 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:42.882 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:42.882 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:42.882 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:42.882 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:42.882 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:42.882 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:42.882 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:42.882 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:42.882 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:42.882 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.882 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.882 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.882 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:23:42.882 00:23:42.882 real 0m58.730s 00:23:42.882 user 2m45.414s 00:23:42.882 sys 0m14.094s 00:23:42.882 ************************************ 00:23:42.882 END TEST nvmf_host_multipath 00:23:42.882 ************************************ 00:23:42.882 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:42.882 14:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:42.882 14:49:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:23:42.882 14:49:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:42.882 14:49:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:42.882 14:49:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.882 ************************************ 00:23:42.882 START TEST nvmf_timeout 00:23:42.882 ************************************ 00:23:42.882 14:49:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:23:43.140 * Looking for test storage... 
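For reference, the multipath teardown echoed above boils down to a short manual sequence. This is a sketch assembled from the commands visible in this log (the pid, NQN, and interface names are specific to this run); the helper functions in nvmf/common.sh wrap the same steps:

  # remove the subsystem used by the test, then stop the nvmf target (pid 79217 here)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 79217
  # unload the host-side NVMe/TCP modules pulled in for the test
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # drop the SPDK_NVMF iptables rules (roughly what the iptr helper does)
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # dismantle the veth/bridge topology; _remove_spdk_ns then deletes the namespace itself
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2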
00:23:43.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:43.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.140 --rc genhtml_branch_coverage=1 00:23:43.140 --rc genhtml_function_coverage=1 00:23:43.140 --rc genhtml_legend=1 00:23:43.140 --rc geninfo_all_blocks=1 00:23:43.140 --rc geninfo_unexecuted_blocks=1 00:23:43.140 00:23:43.140 ' 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:43.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.140 --rc genhtml_branch_coverage=1 00:23:43.140 --rc genhtml_function_coverage=1 00:23:43.140 --rc genhtml_legend=1 00:23:43.140 --rc geninfo_all_blocks=1 00:23:43.140 --rc geninfo_unexecuted_blocks=1 00:23:43.140 00:23:43.140 ' 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:43.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.140 --rc genhtml_branch_coverage=1 00:23:43.140 --rc genhtml_function_coverage=1 00:23:43.140 --rc genhtml_legend=1 00:23:43.140 --rc geninfo_all_blocks=1 00:23:43.140 --rc geninfo_unexecuted_blocks=1 00:23:43.140 00:23:43.140 ' 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:43.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.140 --rc genhtml_branch_coverage=1 00:23:43.140 --rc genhtml_function_coverage=1 00:23:43.140 --rc genhtml_legend=1 00:23:43.140 --rc geninfo_all_blocks=1 00:23:43.140 --rc geninfo_unexecuted_blocks=1 00:23:43.140 00:23:43.140 ' 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.140 
14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:43.140 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:43.141 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:43.141 14:49:52 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:43.141 Cannot find device "nvmf_init_br" 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:43.141 Cannot find device "nvmf_init_br2" 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:23:43.141 Cannot find device "nvmf_tgt_br" 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:43.141 Cannot find device "nvmf_tgt_br2" 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:43.141 Cannot find device "nvmf_init_br" 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:43.141 Cannot find device "nvmf_init_br2" 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:43.141 Cannot find device "nvmf_tgt_br" 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:43.141 Cannot find device "nvmf_tgt_br2" 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:43.141 Cannot find device "nvmf_br" 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:43.141 Cannot find device "nvmf_init_if" 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:43.141 Cannot find device "nvmf_init_if2" 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:43.141 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:43.141 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:43.141 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:23:43.440 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:43.440 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:43.440 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:43.440 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:43.440 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:43.440 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:43.440 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:43.440 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:43.440 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:43.440 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:43.440 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:43.440 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:43.440 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:43.440 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:43.440 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:43.440 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:43.440 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:43.440 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:43.440 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:43.440 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:43.440 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:43.440 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:43.441 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:43.441 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:43.441 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:43.441 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
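Taken together, the nvmf_veth_init sequence above builds a small two-path test topology: the nvmf_tgt_ns_spdk namespace holds the target-side addresses 10.0.0.3 and 10.0.0.4, the host side keeps 10.0.0.1 and 10.0.0.2, and the peer ends of each veth pair are enslaved to the nvmf_br bridge, with iptables ACCEPT rules for the NVMe/TCP port. A condensed sketch of the same setup, showing one initiator/target pair (the test creates two):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT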
00:23:43.441 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:43.441 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:43.441 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:23:43.441 00:23:43.441 --- 10.0.0.3 ping statistics --- 00:23:43.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.441 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:23:43.441 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:43.441 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:43.441 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:23:43.441 00:23:43.441 --- 10.0.0.4 ping statistics --- 00:23:43.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.441 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:23:43.441 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:43.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:43.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.015 ms 00:23:43.441 00:23:43.441 --- 10.0.0.1 ping statistics --- 00:23:43.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.441 rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms 00:23:43.441 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:43.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:43.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:23:43.441 00:23:43.441 --- 10.0.0.2 ping statistics --- 00:23:43.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.441 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:23:43.441 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:43.441 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:23:43.441 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:43.441 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:43.441 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:43.441 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:43.441 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:43.441 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:43.441 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:43.441 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:23:43.441 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:43.441 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:43.441 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:43.441 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=80427 00:23:43.441 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 80427 00:23:43.441 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 80427 ']' 00:23:43.441 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 
00:23:43.441 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:43.441 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:43.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.441 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.441 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:43.441 14:49:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:43.441 [2024-11-04 14:49:52.487363] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:23:43.441 [2024-11-04 14:49:52.487418] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:43.700 [2024-11-04 14:49:52.628542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:43.700 [2024-11-04 14:49:52.664319] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:43.700 [2024-11-04 14:49:52.664358] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:43.700 [2024-11-04 14:49:52.664364] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:43.700 [2024-11-04 14:49:52.664370] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:43.700 [2024-11-04 14:49:52.664375] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
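The target is launched inside the namespace (the `ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x3` line above) so it binds the namespaced 10.0.0.3/10.0.0.4 addresses while bdevperf later connects from the host side of the bridge. A minimal approximation of the launch-and-wait step, assuming rpc_get_methods as the liveness probe; the real waitforlisten helper in autotest_common.sh is more careful (it also checks that the PID stays alive and caps retries at max_retries=100):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # poll the default RPC socket (/var/tmp/spdk.sock) until the app answers
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done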
00:23:43.700 [2024-11-04 14:49:52.665046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.700 [2024-11-04 14:49:52.665078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.700 [2024-11-04 14:49:52.697999] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:44.265 14:49:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:44.265 14:49:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:23:44.265 14:49:53 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:44.265 14:49:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:44.265 14:49:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:44.265 14:49:53 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.265 14:49:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:44.265 14:49:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:44.528 [2024-11-04 14:49:53.574355] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.528 14:49:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:44.789 Malloc0 00:23:44.789 14:49:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:45.057 14:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:45.315 14:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:45.315 [2024-11-04 14:49:54.444253] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:45.572 14:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=80476 00:23:45.572 14:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 80476 /var/tmp/bdevperf.sock 00:23:45.572 14:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:23:45.572 14:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 80476 ']' 00:23:45.572 14:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:45.572 14:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:45.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:45.572 14:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:45.573 14:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:45.573 14:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:45.573 [2024-11-04 14:49:54.498326] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:23:45.573 [2024-11-04 14:49:54.498389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80476 ] 00:23:45.573 [2024-11-04 14:49:54.633186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.573 [2024-11-04 14:49:54.670319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:45.573 [2024-11-04 14:49:54.703371] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:46.506 14:49:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:46.506 14:49:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:23:46.506 14:49:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:46.506 14:49:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:23:46.763 NVMe0n1 00:23:46.763 14:49:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:46.763 14:49:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=80500 00:23:46.763 14:49:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:23:47.021 Running I/O for 10 seconds... 
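With the target up, host/timeout.sh wires both sides over JSON-RPC: a TCP transport, a 64 MiB Malloc0 namespace exported as nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.3:4420, and a separate bdevperf app (RPC socket /var/tmp/bdevperf.sock, pid 80476) that attaches with a deliberately short controller-loss timeout so the timeout paths can be exercised. Condensed from the RPC calls in the trace above; `rpc` is shorthand introduced here for /home/vagrant/spdk_repo/spdk/scripts/rpc.py:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # target side (default RPC socket /var/tmp/spdk.sock)
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # initiator side, against the bdevperf RPC socket
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

  # start the queued job (bdevperf itself was launched with -q 128 -o 4096 -w verify -t 10 -f)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The --ctrlr-loss-timeout-sec 5 / --reconnect-delay-sec 2 pair on the attach call is what the rest of the test exercises.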
00:23:47.955 14:49:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:47.955 10702.00 IOPS, 41.80 MiB/s [2024-11-04T14:49:57.095Z] [2024-11-04 14:49:57.050839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1089b30 is same with the state(6) to be set 00:23:47.955 [2024-11-04 14:49:57.051270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.955 [2024-11-04 14:49:57.051305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.955 [2024-11-04 14:49:57.051321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.955 [2024-11-04 14:49:57.051327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.955 [2024-11-04 14:49:57.051336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.955 [2024-11-04 14:49:57.051342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.955 [2024-11-04 14:49:57.051350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.955 [2024-11-04 14:49:57.051355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.955 [2024-11-04 14:49:57.051363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.955 [2024-11-04 14:49:57.051369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.955 [2024-11-04 14:49:57.051376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.955 [2024-11-04 14:49:57.051381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.955 [2024-11-04 14:49:57.051389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.955 [2024-11-04 14:49:57.051394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.955 [2024-11-04 14:49:57.051402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.955 [2024-11-04 14:49:57.051408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.955 [2024-11-04 14:49:57.051415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.955 [2024-11-04 14:49:57.051421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.955 [2024-11-04 14:49:57.051428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.955 [2024-11-04 14:49:57.051433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.955 [2024-11-04 14:49:57.051441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.955 [2024-11-04 14:49:57.051446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.955 [2024-11-04 14:49:57.051453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.955 [2024-11-04 14:49:57.051458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.956 [2024-11-04 14:49:57.051471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.956 [2024-11-04 14:49:57.051483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.956 [2024-11-04 14:49:57.051496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.956 [2024-11-04 14:49:57.051509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.956 [2024-11-04 14:49:57.051523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.956 [2024-11-04 14:49:57.051536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.956 [2024-11-04 14:49:57.051549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 
[2024-11-04 14:49:57.051556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.956 [2024-11-04 14:49:57.051562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.956 [2024-11-04 14:49:57.051574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.956 [2024-11-04 14:49:57.051588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.956 [2024-11-04 14:49:57.051601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.956 [2024-11-04 14:49:57.051624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.956 [2024-11-04 14:49:57.051637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.956 [2024-11-04 14:49:57.051650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.956 [2024-11-04 14:49:57.051663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.956 [2024-11-04 14:49:57.051676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.956 [2024-11-04 14:49:57.051689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051697] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.956 [2024-11-04 14:49:57.051703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.956 [2024-11-04 14:49:57.051716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.956 [2024-11-04 14:49:57.051728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.956 [2024-11-04 14:49:57.051743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.956 [2024-11-04 14:49:57.051757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.956 [2024-11-04 14:49:57.051771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.956 [2024-11-04 14:49:57.051785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.956 [2024-11-04 14:49:57.051798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.956 [2024-11-04 14:49:57.051812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.956 [2024-11-04 14:49:57.051824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:25 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.956 [2024-11-04 14:49:57.051837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.956 [2024-11-04 14:49:57.051850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.956 [2024-11-04 14:49:57.051862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.956 [2024-11-04 14:49:57.051875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.956 [2024-11-04 14:49:57.051888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.956 [2024-11-04 14:49:57.051901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.956 [2024-11-04 14:49:57.051914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.956 [2024-11-04 14:49:57.051926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.956 [2024-11-04 14:49:57.051939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.956 [2024-11-04 14:49:57.051952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96608 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:47.956 [2024-11-04 14:49:57.051965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.956 [2024-11-04 14:49:57.051978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.956 [2024-11-04 14:49:57.051986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.956 [2024-11-04 14:49:57.051991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.051999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.957 [2024-11-04 14:49:57.052004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.957 [2024-11-04 14:49:57.052017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.957 [2024-11-04 14:49:57.052029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.957 [2024-11-04 14:49:57.052042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.957 [2024-11-04 14:49:57.052056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.957 [2024-11-04 14:49:57.052069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.957 [2024-11-04 14:49:57.052082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.957 
[2024-11-04 14:49:57.052095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.957 [2024-11-04 14:49:57.052108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.957 [2024-11-04 14:49:57.052120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.957 [2024-11-04 14:49:57.052133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.957 [2024-11-04 14:49:57.052146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.957 [2024-11-04 14:49:57.052159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.957 [2024-11-04 14:49:57.052172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.957 [2024-11-04 14:49:57.052184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.957 [2024-11-04 14:49:57.052197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.957 [2024-11-04 14:49:57.052210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.957 [2024-11-04 14:49:57.052224] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.957 [2024-11-04 14:49:57.052236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.957 [2024-11-04 14:49:57.052249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.957 [2024-11-04 14:49:57.052262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.957 [2024-11-04 14:49:57.052275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.957 [2024-11-04 14:49:57.052288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.957 [2024-11-04 14:49:57.052301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.957 [2024-11-04 14:49:57.052314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.957 [2024-11-04 14:49:57.052327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.957 [2024-11-04 14:49:57.052340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.957 [2024-11-04 14:49:57.052353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.957 [2024-11-04 14:49:57.052367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.957 [2024-11-04 14:49:57.052380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.957 [2024-11-04 14:49:57.052393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.957 [2024-11-04 14:49:57.052406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.957 [2024-11-04 14:49:57.052419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.957 [2024-11-04 14:49:57.052432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.957 [2024-11-04 14:49:57.052445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.957 [2024-11-04 14:49:57.052463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.957 [2024-11-04 14:49:57.052477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.957 [2024-11-04 14:49:57.052490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.957 [2024-11-04 14:49:57.052503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.957 [2024-11-04 14:49:57.052510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.958 [2024-11-04 14:49:57.052516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.958 [2024-11-04 14:49:57.052524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.958 [2024-11-04 14:49:57.052529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.958 [2024-11-04 14:49:57.052536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.958 [2024-11-04 14:49:57.052542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.958 [2024-11-04 14:49:57.052549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.958 [2024-11-04 14:49:57.052555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.958 [2024-11-04 14:49:57.052562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.958 [2024-11-04 14:49:57.052567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.958 [2024-11-04 14:49:57.052574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.958 [2024-11-04 14:49:57.052583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.958 [2024-11-04 14:49:57.052591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.958 [2024-11-04 14:49:57.052597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.958 [2024-11-04 14:49:57.052611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.958 [2024-11-04 14:49:57.052617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.958 [2024-11-04 14:49:57.052624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.958 [2024-11-04 14:49:57.052630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.958 [2024-11-04 
14:49:57.052637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.958 [2024-11-04 14:49:57.052643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.958 [2024-11-04 14:49:57.052650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.958 [2024-11-04 14:49:57.052655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.958 [2024-11-04 14:49:57.052663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.958 [2024-11-04 14:49:57.052668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.958 [2024-11-04 14:49:57.052676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.958 [2024-11-04 14:49:57.052682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.958 [2024-11-04 14:49:57.052690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.958 [2024-11-04 14:49:57.052695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.958 [2024-11-04 14:49:57.052703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.958 [2024-11-04 14:49:57.052709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.958 [2024-11-04 14:49:57.052716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.958 [2024-11-04 14:49:57.052722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.958 [2024-11-04 14:49:57.052729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.958 [2024-11-04 14:49:57.052735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.958 [2024-11-04 14:49:57.052742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.958 [2024-11-04 14:49:57.052747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.958 [2024-11-04 14:49:57.052754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x550280 is same with the state(6) to be set 00:23:47.958 [2024-11-04 14:49:57.052761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:47.958 [2024-11-04 14:49:57.052766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:23:47.958 [2024-11-04 14:49:57.052772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96352 len:8 PRP1 0x0 PRP2 0x0 00:23:47.958 [2024-11-04 14:49:57.052777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.958 [2024-11-04 14:49:57.052784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:47.958 [2024-11-04 14:49:57.052788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:47.958 [2024-11-04 14:49:57.052795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96904 len:8 PRP1 0x0 PRP2 0x0 00:23:47.958 [2024-11-04 14:49:57.052801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.958 [2024-11-04 14:49:57.052806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:47.958 [2024-11-04 14:49:57.052811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:47.958 [2024-11-04 14:49:57.052816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96912 len:8 PRP1 0x0 PRP2 0x0 00:23:47.958 [2024-11-04 14:49:57.052821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.958 [2024-11-04 14:49:57.052827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:47.958 [2024-11-04 14:49:57.052831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:47.958 [2024-11-04 14:49:57.052836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96920 len:8 PRP1 0x0 PRP2 0x0 00:23:47.958 [2024-11-04 14:49:57.052842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.958 [2024-11-04 14:49:57.052848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:47.958 [2024-11-04 14:49:57.052852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:47.958 [2024-11-04 14:49:57.052857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96928 len:8 PRP1 0x0 PRP2 0x0 00:23:47.958 [2024-11-04 14:49:57.052863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.958 [2024-11-04 14:49:57.052870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:47.958 [2024-11-04 14:49:57.052874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:47.958 [2024-11-04 14:49:57.052879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96936 len:8 PRP1 0x0 PRP2 0x0 00:23:47.958 [2024-11-04 14:49:57.052885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.958 [2024-11-04 14:49:57.052890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:47.958 [2024-11-04 14:49:57.052894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:47.958 [2024-11-04 14:49:57.052899] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96944 len:8 PRP1 0x0 PRP2 0x0 00:23:47.958 [2024-11-04 14:49:57.052904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.958 [2024-11-04 14:49:57.052910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:47.958 [2024-11-04 14:49:57.052914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:47.958 [2024-11-04 14:49:57.052919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96952 len:8 PRP1 0x0 PRP2 0x0 00:23:47.958 [2024-11-04 14:49:57.052924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.958 [2024-11-04 14:49:57.052930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:47.958 [2024-11-04 14:49:57.052934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:47.958 [2024-11-04 14:49:57.052939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96960 len:8 PRP1 0x0 PRP2 0x0 00:23:47.958 [2024-11-04 14:49:57.052944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.958 [2024-11-04 14:49:57.052950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:47.958 [2024-11-04 14:49:57.052954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:47.958 [2024-11-04 14:49:57.052961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96968 len:8 PRP1 0x0 PRP2 0x0 00:23:47.958 [2024-11-04 14:49:57.052966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.958 [2024-11-04 14:49:57.052972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:47.958 [2024-11-04 14:49:57.052976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:47.958 [2024-11-04 14:49:57.052981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96976 len:8 PRP1 0x0 PRP2 0x0 00:23:47.958 [2024-11-04 14:49:57.052987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.958 [2024-11-04 14:49:57.052992] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:47.958 [2024-11-04 14:49:57.052996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:47.958 [2024-11-04 14:49:57.053001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96984 len:8 PRP1 0x0 PRP2 0x0 00:23:47.958 [2024-11-04 14:49:57.053006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.958 [2024-11-04 14:49:57.053013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:47.958 [2024-11-04 14:49:57.053017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:47.958 [2024-11-04 14:49:57.053021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:96992 len:8 PRP1 0x0 PRP2 0x0 00:23:47.959 [2024-11-04 14:49:57.053027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.959 [2024-11-04 14:49:57.053034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:47.959 [2024-11-04 14:49:57.053038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:47.959 [2024-11-04 14:49:57.053043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97000 len:8 PRP1 0x0 PRP2 0x0 00:23:47.959 [2024-11-04 14:49:57.053049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.959 [2024-11-04 14:49:57.053055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:47.959 [2024-11-04 14:49:57.053059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:47.959 [2024-11-04 14:49:57.053063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97008 len:8 PRP1 0x0 PRP2 0x0 00:23:47.959 [2024-11-04 14:49:57.053069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.959 [2024-11-04 14:49:57.053075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:47.959 [2024-11-04 14:49:57.053079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:47.959 [2024-11-04 14:49:57.053084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97016 len:8 PRP1 0x0 PRP2 0x0 00:23:47.959 [2024-11-04 14:49:57.053089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.959 [2024-11-04 14:49:57.053095] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:47.959 [2024-11-04 14:49:57.053099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:47.959 [2024-11-04 14:49:57.053103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97024 len:8 PRP1 0x0 PRP2 0x0 00:23:47.959 [2024-11-04 14:49:57.053109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.959 [2024-11-04 14:49:57.053114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:47.959 [2024-11-04 14:49:57.053119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:47.959 [2024-11-04 14:49:57.053125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97032 len:8 PRP1 0x0 PRP2 0x0 00:23:47.959 [2024-11-04 14:49:57.053130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.959 [2024-11-04 14:49:57.053136] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:47.959 [2024-11-04 14:49:57.053140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:47.959 [2024-11-04 14:49:57.053145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97040 len:8 PRP1 0x0 PRP2 0x0 
00:23:47.959 [2024-11-04 14:49:57.053150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.959 [2024-11-04 14:49:57.053413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:47.959 [2024-11-04 14:49:57.053474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4e2e50 (9): Bad file descriptor 00:23:47.959 [2024-11-04 14:49:57.053543] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.959 [2024-11-04 14:49:57.053563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4e2e50 with addr=10.0.0.3, port=4420 00:23:47.959 [2024-11-04 14:49:57.053570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4e2e50 is same with the state(6) to be set 00:23:47.959 [2024-11-04 14:49:57.053581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4e2e50 (9): Bad file descriptor 00:23:47.959 [2024-11-04 14:49:57.053591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:47.959 [2024-11-04 14:49:57.053596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:47.959 [2024-11-04 14:49:57.053616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:47.959 [2024-11-04 14:49:57.053624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:47.959 [2024-11-04 14:49:57.053630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:47.959 14:49:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:23:49.823 6001.50 IOPS, 23.44 MiB/s [2024-11-04T14:49:59.220Z] 4001.00 IOPS, 15.63 MiB/s [2024-11-04T14:49:59.220Z] [2024-11-04 14:49:59.053825] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.080 [2024-11-04 14:49:59.053870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4e2e50 with addr=10.0.0.3, port=4420 00:23:50.080 [2024-11-04 14:49:59.053879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4e2e50 is same with the state(6) to be set 00:23:50.080 [2024-11-04 14:49:59.053893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4e2e50 (9): Bad file descriptor 00:23:50.080 [2024-11-04 14:49:59.053909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:50.080 [2024-11-04 14:49:59.053914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:50.080 [2024-11-04 14:49:59.053920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:50.080 [2024-11-04 14:49:59.053926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:23:50.080 [2024-11-04 14:49:59.053933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:50.080 14:49:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:23:50.080 14:49:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:50.080 14:49:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:23:50.337 14:49:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:23:50.337 14:49:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:23:50.337 14:49:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:23:50.337 14:49:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:23:50.337 14:49:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:23:50.337 14:49:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:23:51.888 3000.75 IOPS, 11.72 MiB/s [2024-11-04T14:50:01.285Z] 2400.60 IOPS, 9.38 MiB/s [2024-11-04T14:50:01.286Z] [2024-11-04 14:50:01.054155] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.146 [2024-11-04 14:50:01.054200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4e2e50 with addr=10.0.0.3, port=4420 00:23:52.146 [2024-11-04 14:50:01.054208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4e2e50 is same with the state(6) to be set 00:23:52.146 [2024-11-04 14:50:01.054221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4e2e50 (9): Bad file descriptor 00:23:52.146 [2024-11-04 14:50:01.054232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:52.146 [2024-11-04 14:50:01.054237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:52.146 [2024-11-04 14:50:01.054242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:52.146 [2024-11-04 14:50:01.054249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:52.146 [2024-11-04 14:50:01.054255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:54.014 2000.50 IOPS, 7.81 MiB/s [2024-11-04T14:50:03.154Z] 1714.71 IOPS, 6.70 MiB/s [2024-11-04T14:50:03.154Z] [2024-11-04 14:50:03.054400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:54.014 [2024-11-04 14:50:03.054445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:54.014 [2024-11-04 14:50:03.054451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:54.014 [2024-11-04 14:50:03.054457] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:23:54.014 [2024-11-04 14:50:03.054463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:23:54.965 1500.38 IOPS, 5.86 MiB/s 00:23:54.965 Latency(us) 00:23:54.965 [2024-11-04T14:50:04.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.965 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:54.965 Verification LBA range: start 0x0 length 0x4000 00:23:54.965 NVMe0n1 : 8.10 1481.87 5.79 15.80 0.00 85308.18 3049.94 7020619.62 00:23:54.965 [2024-11-04T14:50:04.105Z] =================================================================================================================== 00:23:54.965 [2024-11-04T14:50:04.105Z] Total : 1481.87 5.79 15.80 0.00 85308.18 3049.94 7020619.62 00:23:54.965 { 00:23:54.965 "results": [ 00:23:54.965 { 00:23:54.965 "job": "NVMe0n1", 00:23:54.965 "core_mask": "0x4", 00:23:54.965 "workload": "verify", 00:23:54.965 "status": "finished", 00:23:54.965 "verify_range": { 00:23:54.965 "start": 0, 00:23:54.965 "length": 16384 00:23:54.965 }, 00:23:54.965 "queue_depth": 128, 00:23:54.965 "io_size": 4096, 00:23:54.965 "runtime": 8.099912, 00:23:54.965 "iops": 1481.8679511579878, 00:23:54.965 "mibps": 5.78854668421089, 00:23:54.965 "io_failed": 128, 00:23:54.965 "io_timeout": 0, 00:23:54.965 "avg_latency_us": 85308.18102991064, 00:23:54.965 "min_latency_us": 3049.944615384615, 00:23:54.965 "max_latency_us": 7020619.618461538 00:23:54.965 } 00:23:54.965 ], 00:23:54.965 "core_count": 1 00:23:54.965 } 00:23:55.530 14:50:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:23:55.530 14:50:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:55.530 14:50:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:23:55.787 14:50:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:23:55.787 14:50:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:23:55.787 14:50:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:23:55.787 14:50:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:23:55.787 14:50:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:23:55.787 14:50:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 80500 00:23:55.787 14:50:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 80476 00:23:55.787 14:50:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 80476 ']' 00:23:55.787 14:50:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 80476 00:23:55.787 14:50:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:23:55.787 14:50:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:55.787 14:50:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80476 00:23:56.043 14:50:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:56.044 killing process with pid 80476 00:23:56.044 Received shutdown signal, test time was about 8.991418 seconds 00:23:56.044 00:23:56.044 Latency(us) 00:23:56.044 [2024-11-04T14:50:05.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.044 [2024-11-04T14:50:05.184Z] 
=================================================================================================================== 00:23:56.044 [2024-11-04T14:50:05.184Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:56.044 14:50:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:56.044 14:50:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80476' 00:23:56.044 14:50:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 80476 00:23:56.044 14:50:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 80476 00:23:56.044 14:50:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:56.301 [2024-11-04 14:50:05.230121] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:56.301 14:50:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=80622 00:23:56.302 14:50:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:23:56.302 14:50:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 80622 /var/tmp/bdevperf.sock 00:23:56.302 14:50:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 80622 ']' 00:23:56.302 14:50:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:56.302 14:50:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:56.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:56.302 14:50:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:56.302 14:50:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:56.302 14:50:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:56.302 [2024-11-04 14:50:05.281334] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:23:56.302 [2024-11-04 14:50:05.281396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80622 ] 00:23:56.302 [2024-11-04 14:50:05.418656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.570 [2024-11-04 14:50:05.451511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:56.570 [2024-11-04 14:50:05.481312] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:57.146 14:50:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:57.146 14:50:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:23:57.146 14:50:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:57.402 14:50:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:23:57.659 NVMe0n1 00:23:57.659 14:50:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=80640 00:23:57.659 14:50:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:57.659 14:50:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:23:57.659 Running I/O for 10 seconds... 
00:23:58.591 14:50:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:58.851 9749.00 IOPS, 38.08 MiB/s [2024-11-04T14:50:07.991Z] [2024-11-04 14:50:07.810082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:58.851 [2024-11-04 14:50:07.810117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.851 [2024-11-04 14:50:07.810124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:58.851 [2024-11-04 14:50:07.810129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.851 [2024-11-04 14:50:07.810135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:58.851 [2024-11-04 14:50:07.810139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.851 [2024-11-04 14:50:07.810145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:58.851 [2024-11-04 14:50:07.810149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.851 [2024-11-04 14:50:07.810154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67e50 is same with the state(6) to be set 00:23:58.851 [2024-11-04 14:50:07.810318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:86176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.851 [2024-11-04 14:50:07.810327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.851 [2024-11-04 14:50:07.810339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:86304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.851 [2024-11-04 14:50:07.810344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.851 [2024-11-04 14:50:07.810350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:86312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.851 [2024-11-04 14:50:07.810355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.851 [2024-11-04 14:50:07.810361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:86320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.851 [2024-11-04 14:50:07.810365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.851 [2024-11-04 14:50:07.810371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:86328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.851 [2024-11-04 14:50:07.810376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:58.851 [2024-11-04 14:50:07.810383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:86336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.851 [2024-11-04 14:50:07.810387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.851 [2024-11-04 14:50:07.810393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:86344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.851 [2024-11-04 14:50:07.810397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.851 [2024-11-04 14:50:07.810403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:86352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.851 [2024-11-04 14:50:07.810408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:86360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:86368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:86376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:86392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:86400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:86408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 
14:50:07.810498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:86416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:86424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:86432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:86448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:86472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:86480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810601] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:86496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:86512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:86520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:86528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:86536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:86544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:86552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:86568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:72 nsid:1 lba:86576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:86584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:86592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:86600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:86608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:86616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:86624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:86632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:86640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:86648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:86656 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:86664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-11-04 14:50:07.810841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:86672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-11-04 14:50:07.810845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.810851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:86680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.810856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.810861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:86688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.810866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.810872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:86696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.810876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.810882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:86704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.810886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.810892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:86712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.810896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.810902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:86720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.810906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.810912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:86728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.810916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.810922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:86736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 
14:50:07.810927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.810933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:86744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.810937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.810944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.810948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.810954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.810959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.810964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:86768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.810969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.810975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:86776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.810979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.810985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:86784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.810990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.810995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:86792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.811000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.811006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:86800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.811010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.811016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.811020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.811026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.811031] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.811036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:86824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.811041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.811046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:86832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.811051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.811057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:86840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.811061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.811067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:86848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.811071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.811077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:86856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.811081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.811087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:86864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.811092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.811098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:86872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.811103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.811109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:86880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.811113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.811119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:86888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.811124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.811130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:86896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.811134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.811140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:86904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.811145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.811151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:86912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.811155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.811161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:86920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.811166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.811171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:86928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.811176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.811182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:86936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.811186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.811192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:86944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.811196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.811202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.811206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.811213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:86960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.811218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.811223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:86968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.853 [2024-11-04 14:50:07.811228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.853 [2024-11-04 14:50:07.811233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:86976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.854 [2024-11-04 14:50:07.811238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:86984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.854 [2024-11-04 14:50:07.811248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:86992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.854 [2024-11-04 14:50:07.811258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:87000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.854 [2024-11-04 14:50:07.811269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.854 [2024-11-04 14:50:07.811279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:87016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.854 [2024-11-04 14:50:07.811289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:87024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.854 [2024-11-04 14:50:07.811300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:87032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.854 [2024-11-04 14:50:07.811314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:87040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.854 [2024-11-04 14:50:07.811325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:87048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.854 [2024-11-04 14:50:07.811335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:87056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.854 [2024-11-04 14:50:07.811346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811353] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:87064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.854 [2024-11-04 14:50:07.811358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:87072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.854 [2024-11-04 14:50:07.811368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:87080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.854 [2024-11-04 14:50:07.811378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:87088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.854 [2024-11-04 14:50:07.811388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:87096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.854 [2024-11-04 14:50:07.811398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:87104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.854 [2024-11-04 14:50:07.811408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:87112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.854 [2024-11-04 14:50:07.811418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.854 [2024-11-04 14:50:07.811430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.854 [2024-11-04 14:50:07.811440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:87136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.854 [2024-11-04 14:50:07.811450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811456] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:87144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.854 [2024-11-04 14:50:07.811460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.854 [2024-11-04 14:50:07.811470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:87160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.854 [2024-11-04 14:50:07.811483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:87168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.854 [2024-11-04 14:50:07.811494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.854 [2024-11-04 14:50:07.811508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:86184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.854 [2024-11-04 14:50:07.811519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:86192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.854 [2024-11-04 14:50:07.811529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:86200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.854 [2024-11-04 14:50:07.811540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:86208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.854 [2024-11-04 14:50:07.811551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:86216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.854 [2024-11-04 14:50:07.811561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86224 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.854 [2024-11-04 14:50:07.811571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:86232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.854 [2024-11-04 14:50:07.811581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:86240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.854 [2024-11-04 14:50:07.811592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:86248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.854 [2024-11-04 14:50:07.811602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:86256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.854 [2024-11-04 14:50:07.811619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:86264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.854 [2024-11-04 14:50:07.811630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.854 [2024-11-04 14:50:07.811636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:86272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.855 [2024-11-04 14:50:07.811641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.855 [2024-11-04 14:50:07.811646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:86280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.855 [2024-11-04 14:50:07.811651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.855 [2024-11-04 14:50:07.811657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:86288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.855 [2024-11-04 14:50:07.811663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.855 [2024-11-04 14:50:07.811669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:86296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.855 [2024-11-04 14:50:07.811673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.855 [2024-11-04 14:50:07.811679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:87184 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:58.855 [2024-11-04 14:50:07.811685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.855 [2024-11-04 14:50:07.811690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5280 is same with the state(6) to be set 00:23:58.855 [2024-11-04 14:50:07.811696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.855 [2024-11-04 14:50:07.811700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.855 [2024-11-04 14:50:07.811704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87192 len:8 PRP1 0x0 PRP2 0x0 00:23:58.855 [2024-11-04 14:50:07.811709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.855 [2024-11-04 14:50:07.811917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:58.855 [2024-11-04 14:50:07.811929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf67e50 (9): Bad file descriptor 00:23:58.855 [2024-11-04 14:50:07.811986] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.855 [2024-11-04 14:50:07.811996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf67e50 with addr=10.0.0.3, port=4420 00:23:58.855 [2024-11-04 14:50:07.812001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67e50 is same with the state(6) to be set 00:23:58.855 [2024-11-04 14:50:07.812009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf67e50 (9): Bad file descriptor 00:23:58.855 [2024-11-04 14:50:07.812017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:23:58.855 [2024-11-04 14:50:07.812022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:23:58.855 [2024-11-04 14:50:07.812027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:23:58.855 [2024-11-04 14:50:07.812032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
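Note on the block above: the repeated "connect() failed, errno = 111" entries are ECONNREFUSED. The timeout test has taken the subsystem's TCP listener down, so queued I/O is aborted with "ABORTED - SQ DELETION" and every reconnect attempt is refused until host/timeout.sh re-adds the listener (the nvmf_subsystem_add_listener call just below). A minimal sketch of that listener toggle, built from the rpc.py invocations that appear in this log; the path, NQN, address and port are copied from the log, and this is an illustration rather than the actual host/timeout.sh:

    # Sketch only: drop the TCP listener so the initiator's reconnects fail with
    # ECONNREFUSED (errno 111), then restore it so the controller reset can succeed.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py        # path as printed in this log
    NQN=nqn.2016-06.io.spdk:cnode1
    "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420
    sleep 1                                                 # in-flight I/O aborts, reconnects are refused
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420
    # the log then reports "Resetting controller successful" for nqn.2016-06.io.spdk:cnode1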
00:23:58.855 [2024-11-04 14:50:07.812038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:23:58.855 14:50:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:23:59.787 5386.00 IOPS, 21.04 MiB/s [2024-11-04T14:50:08.927Z] [2024-11-04 14:50:08.812134] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.787 [2024-11-04 14:50:08.812167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf67e50 with addr=10.0.0.3, port=4420
00:23:59.787 [2024-11-04 14:50:08.812175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67e50 is same with the state(6) to be set
00:23:59.787 [2024-11-04 14:50:08.812187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf67e50 (9): Bad file descriptor
00:23:59.787 [2024-11-04 14:50:08.812197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:23:59.787 [2024-11-04 14:50:08.812201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:23:59.787 [2024-11-04 14:50:08.812207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:23:59.787 [2024-11-04 14:50:08.812213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:23:59.787 [2024-11-04 14:50:08.812218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:23:59.787 14:50:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:24:00.045 [2024-11-04 14:50:09.018498] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:24:00.045 14:50:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 80640
00:24:00.868 3590.67 IOPS, 14.03 MiB/s [2024-11-04T14:50:10.008Z] [2024-11-04 14:50:09.832390] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:24:02.731 2693.00 IOPS, 10.52 MiB/s [2024-11-04T14:50:12.803Z] 4426.80 IOPS, 17.29 MiB/s [2024-11-04T14:50:13.743Z] 5834.50 IOPS, 22.79 MiB/s [2024-11-04T14:50:15.113Z] 6833.29 IOPS, 26.69 MiB/s [2024-11-04T14:50:16.045Z] 7593.12 IOPS, 29.66 MiB/s [2024-11-04T14:50:16.975Z] 8184.78 IOPS, 31.97 MiB/s [2024-11-04T14:50:16.975Z] 8661.70 IOPS, 33.83 MiB/s
00:24:07.835 Latency(us)
00:24:07.835 [2024-11-04T14:50:16.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:07.835 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:07.835 Verification LBA range: start 0x0 length 0x4000
00:24:07.835 NVMe0n1 : 10.01 8668.58 33.86 0.00 0.00 14747.31 964.14 3019898.88
00:24:07.835 [2024-11-04T14:50:16.975Z] ===================================================================================================================
00:24:07.835 [2024-11-04T14:50:16.975Z] Total : 8668.58 33.86 0.00 0.00 14747.31 964.14 3019898.88
00:24:07.835 {
00:24:07.835   "results": [
00:24:07.835     {
00:24:07.835       "job": "NVMe0n1",
00:24:07.835       "core_mask": "0x4",
00:24:07.835       "workload": "verify",
00:24:07.835       "status": "finished",
00:24:07.835       "verify_range": {
00:24:07.835         "start": 0,
00:24:07.835         "length": 16384
00:24:07.835       },
00:24:07.835       "queue_depth": 128,
00:24:07.835       "io_size": 4096,
00:24:07.835       "runtime": 10.006832,
00:24:07.835       "iops": 8668.577627764711,
00:24:07.835       "mibps": 33.8616313584559,
00:24:07.835       "io_failed": 0,
00:24:07.835       "io_timeout": 0,
00:24:07.835       "avg_latency_us": 14747.31092090433,
00:24:07.835       "min_latency_us": 964.1353846153846,
00:24:07.835       "max_latency_us": 3019898.88
00:24:07.835     }
00:24:07.835   ],
00:24:07.835   "core_count": 1
00:24:07.835 }
00:24:07.835 14:50:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=80750
00:24:07.835 14:50:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:24:07.835 14:50:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:07.835 Running I/O for 10 seconds...
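The bdevperf run above reports the same summary twice, once as the fixed-width latency table and once as the JSON "results" block. A hedged example of pulling the headline numbers back out of that JSON with jq; jq is not part of this test run, and results.json is an assumed file holding a saved copy of the block above:

    # Sketch only: field names (.results[0].iops, .mibps, .avg_latency_us) come from the
    # JSON block printed above; jq and results.json are assumptions, not part of the test.
    jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg latency \(.avg_latency_us) us"' results.json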
00:24:08.767 14:50:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:09.026 9750.00 IOPS, 38.09 MiB/s [2024-11-04T14:50:18.166Z] [2024-11-04 14:50:17.918188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.026 [2024-11-04 14:50:17.918231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.026 [2024-11-04 14:50:17.918244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:86952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.026 [2024-11-04 14:50:17.918250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.026 [2024-11-04 14:50:17.918256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:86960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.026 [2024-11-04 14:50:17.918261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.026 [2024-11-04 14:50:17.918267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:86968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.026 [2024-11-04 14:50:17.918273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.026 [2024-11-04 14:50:17.918279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:86976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.026 [2024-11-04 14:50:17.918283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.026 [2024-11-04 14:50:17.918289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:86984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.026 [2024-11-04 14:50:17.918293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.026 [2024-11-04 14:50:17.918299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:86992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.026 [2024-11-04 14:50:17.918304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:87000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.027 [2024-11-04 14:50:17.918314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:87008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.027 [2024-11-04 14:50:17.918324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:87016 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.027 [2024-11-04 14:50:17.918334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:87024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.027 [2024-11-04 14:50:17.918344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:87032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.027 [2024-11-04 14:50:17.918354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:87040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.027 [2024-11-04 14:50:17.918364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:87048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.027 [2024-11-04 14:50:17.918374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:87056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.027 [2024-11-04 14:50:17.918384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:87064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.027 [2024-11-04 14:50:17.918394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.027 [2024-11-04 14:50:17.918407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:87080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.027 [2024-11-04 14:50:17.918417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:87088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.027 [2024-11-04 14:50:17.918428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:87096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:09.027 [2024-11-04 14:50:17.918438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:87104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.027 [2024-11-04 14:50:17.918448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:87112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.027 [2024-11-04 14:50:17.918465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:87120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.027 [2024-11-04 14:50:17.918476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:87128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.027 [2024-11-04 14:50:17.918486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:87136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.027 [2024-11-04 14:50:17.918501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:87144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.027 [2024-11-04 14:50:17.918512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:87152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.027 [2024-11-04 14:50:17.918522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:87160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.027 [2024-11-04 14:50:17.918532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:87168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.027 [2024-11-04 14:50:17.918542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.027 [2024-11-04 14:50:17.918552] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:87184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.027 [2024-11-04 14:50:17.918562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:86192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.027 [2024-11-04 14:50:17.918572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:86200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.027 [2024-11-04 14:50:17.918583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:86208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.027 [2024-11-04 14:50:17.918593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.027 [2024-11-04 14:50:17.918603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:86224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.027 [2024-11-04 14:50:17.918621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.027 [2024-11-04 14:50:17.918632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:86240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.027 [2024-11-04 14:50:17.918642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:86248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.027 [2024-11-04 14:50:17.918652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:86256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.027 [2024-11-04 14:50:17.918663] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:86264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.027 [2024-11-04 14:50:17.918673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:86272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.027 [2024-11-04 14:50:17.918683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.027 [2024-11-04 14:50:17.918696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:86288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.027 [2024-11-04 14:50:17.918706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.027 [2024-11-04 14:50:17.918716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.027 [2024-11-04 14:50:17.918722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:86304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.027 [2024-11-04 14:50:17.918726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.918732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:87192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.028 [2024-11-04 14:50:17.918736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.918742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:87200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.028 [2024-11-04 14:50:17.918746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.918752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:86312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.918757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.918763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:86320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.918768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.918774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:86328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.918778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.918784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:86336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.918788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.918794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:86344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.918798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.918804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:86352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.918809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.918815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:86360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.918819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.918825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:87208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.028 [2024-11-04 14:50:17.918829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.918835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:86368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.918839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.918845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:86376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.918849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.918855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:86384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.918859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.918865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:86392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.918870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:09.028 [2024-11-04 14:50:17.918875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:86400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.918880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.918885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:86408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.918890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.918896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.918900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.918905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.918910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.918915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:86432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.918920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.918926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:86440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.918931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.918937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.918941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.918947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:86456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.918951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.918957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:86464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.918962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.918967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:86472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.918972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.918978] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:86480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.918982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.918988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:86488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.918992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.918998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:86496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.919003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.919009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:86504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.919013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.919022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:86512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.919026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.919032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:86520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.919036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.919042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:86528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.919046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.919051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:86536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.919056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.919062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:86544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.919066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.919072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:86552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.919076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.919082] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:86560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.919088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.919094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:86568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.919098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.919104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:86576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.919108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.919114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:86584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.028 [2024-11-04 14:50:17.919118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.028 [2024-11-04 14:50:17.919124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:86592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:86600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:86608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:86616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:86624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:86632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:55 nsid:1 lba:86640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:86648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:86656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:86664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:86672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:86680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:86688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:86696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:86704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:86712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:86720 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:86728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:86736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:86744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:86752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:86760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:86768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:86776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:86784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:86792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:86800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:09.029 [2024-11-04 14:50:17.919402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:86808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:86816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:86824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:86832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:86840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:86848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:86864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:86872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:86880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919506] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:86888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:86896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.029 [2024-11-04 14:50:17.919533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:86904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.029 [2024-11-04 14:50:17.919538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.030 [2024-11-04 14:50:17.919544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.030 [2024-11-04 14:50:17.919549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.030 [2024-11-04 14:50:17.919554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.030 [2024-11-04 14:50:17.919558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.030 [2024-11-04 14:50:17.919564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:86928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.030 [2024-11-04 14:50:17.919568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.030 [2024-11-04 14:50:17.919574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd6350 is same with the state(6) to be set 00:24:09.030 [2024-11-04 14:50:17.919580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:09.030 [2024-11-04 14:50:17.919584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:09.030 [2024-11-04 14:50:17.919589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86936 len:8 PRP1 0x0 PRP2 0x0 00:24:09.030 [2024-11-04 14:50:17.919594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.030 [2024-11-04 14:50:17.919804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:09.030 [2024-11-04 14:50:17.919854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf67e50 (9): Bad file descriptor 00:24:09.030 [2024-11-04 14:50:17.919917] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.030 [2024-11-04 14:50:17.919926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf67e50 with 
addr=10.0.0.3, port=4420 00:24:09.030 [2024-11-04 14:50:17.919930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67e50 is same with the state(6) to be set 00:24:09.030 [2024-11-04 14:50:17.919939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf67e50 (9): Bad file descriptor 00:24:09.030 [2024-11-04 14:50:17.919947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:24:09.030 [2024-11-04 14:50:17.919951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:24:09.030 [2024-11-04 14:50:17.919956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:09.030 [2024-11-04 14:50:17.919963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:24:09.030 [2024-11-04 14:50:17.919968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:09.030 14:50:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:24:09.962 5387.00 IOPS, 21.04 MiB/s [2024-11-04T14:50:19.102Z] [2024-11-04 14:50:18.920061] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-11-04 14:50:18.920251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf67e50 with addr=10.0.0.3, port=4420 00:24:09.962 [2024-11-04 14:50:18.920263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67e50 is same with the state(6) to be set 00:24:09.962 [2024-11-04 14:50:18.920278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf67e50 (9): Bad file descriptor 00:24:09.962 [2024-11-04 14:50:18.920288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:24:09.962 [2024-11-04 14:50:18.920292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:24:09.962 [2024-11-04 14:50:18.920298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:09.962 [2024-11-04 14:50:18.920305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:24:09.962 [2024-11-04 14:50:18.920311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:10.895 3591.33 IOPS, 14.03 MiB/s [2024-11-04T14:50:20.035Z] [2024-11-04 14:50:19.920400] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.895 [2024-11-04 14:50:19.920443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf67e50 with addr=10.0.0.3, port=4420 00:24:10.895 [2024-11-04 14:50:19.920450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67e50 is same with the state(6) to be set 00:24:10.895 [2024-11-04 14:50:19.920462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf67e50 (9): Bad file descriptor 00:24:10.895 [2024-11-04 14:50:19.920471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:24:10.895 [2024-11-04 14:50:19.920476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:24:10.895 [2024-11-04 14:50:19.920482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:10.895 [2024-11-04 14:50:19.920487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:24:10.895 [2024-11-04 14:50:19.920493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:11.871 2693.50 IOPS, 10.52 MiB/s [2024-11-04T14:50:21.011Z] [2024-11-04 14:50:20.923197] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.871 [2024-11-04 14:50:20.923365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf67e50 with addr=10.0.0.3, port=4420 00:24:11.871 [2024-11-04 14:50:20.923376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67e50 is same with the state(6) to be set 00:24:11.871 [2024-11-04 14:50:20.923546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf67e50 (9): Bad file descriptor 00:24:11.871 [2024-11-04 14:50:20.923723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:24:11.871 [2024-11-04 14:50:20.923730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:24:11.871 [2024-11-04 14:50:20.923735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:11.871 [2024-11-04 14:50:20.923742] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
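Every attempt in this loop fails inside uring_sock_create with errno = 111, i.e. ECONNREFUSED on Linux, which is consistent with nothing listening on 10.0.0.3:4420 until the listener is re-added at host/timeout.sh@102 just below. A one-line check of that errno mapping (a sketch, not part of the test scripts; python3 is assumed to be available, as the run already drives rpc.py):
# Decode errno 111 as reported by uring_sock_create above (expected: ECONNREFUSED - Connection refused)
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'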
00:24:11.871 [2024-11-04 14:50:20.923748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:11.871 14:50:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:12.129 [2024-11-04 14:50:21.123361] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:12.130 14:50:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 80750 00:24:12.954 2154.80 IOPS, 8.42 MiB/s [2024-11-04T14:50:22.094Z] [2024-11-04 14:50:21.949662] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:24:14.822 3744.83 IOPS, 14.63 MiB/s [2024-11-04T14:50:24.895Z] 5143.57 IOPS, 20.09 MiB/s [2024-11-04T14:50:25.837Z] 6184.62 IOPS, 24.16 MiB/s [2024-11-04T14:50:27.211Z] 7001.44 IOPS, 27.35 MiB/s [2024-11-04T14:50:27.211Z] 7674.10 IOPS, 29.98 MiB/s 00:24:18.071 Latency(us) 00:24:18.071 [2024-11-04T14:50:27.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.071 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:18.071 Verification LBA range: start 0x0 length 0x4000 00:24:18.071 NVMe0n1 : 10.01 7681.10 30.00 5397.89 0.00 9759.21 444.26 3006993.33 00:24:18.071 [2024-11-04T14:50:27.211Z] =================================================================================================================== 00:24:18.071 [2024-11-04T14:50:27.211Z] Total : 7681.10 30.00 5397.89 0.00 9759.21 0.00 3006993.33 00:24:18.071 { 00:24:18.071 "results": [ 00:24:18.071 { 00:24:18.071 "job": "NVMe0n1", 00:24:18.071 "core_mask": "0x4", 00:24:18.071 "workload": "verify", 00:24:18.071 "status": "finished", 00:24:18.071 "verify_range": { 00:24:18.071 "start": 0, 00:24:18.071 "length": 16384 00:24:18.071 }, 00:24:18.071 "queue_depth": 128, 00:24:18.071 "io_size": 4096, 00:24:18.071 "runtime": 10.006504, 00:24:18.071 "iops": 7681.104209821931, 00:24:18.072 "mibps": 30.00431331961692, 00:24:18.072 "io_failed": 54014, 00:24:18.072 "io_timeout": 0, 00:24:18.072 "avg_latency_us": 9759.207553702154, 00:24:18.072 "min_latency_us": 444.2584615384615, 00:24:18.072 "max_latency_us": 3006993.329230769 00:24:18.072 } 00:24:18.072 ], 00:24:18.072 "core_count": 1 00:24:18.072 } 00:24:18.072 14:50:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 80622 00:24:18.072 14:50:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 80622 ']' 00:24:18.072 14:50:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 80622 00:24:18.072 14:50:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:24:18.072 14:50:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:18.072 14:50:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80622 00:24:18.072 killing process with pid 80622 00:24:18.072 Received shutdown signal, test time was about 10.000000 seconds 00:24:18.072 00:24:18.072 Latency(us) 00:24:18.072 [2024-11-04T14:50:27.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.072 [2024-11-04T14:50:27.212Z] =================================================================================================================== 00:24:18.072 [2024-11-04T14:50:27.212Z] Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:18.072 14:50:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:18.072 14:50:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:18.072 14:50:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80622' 00:24:18.072 14:50:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 80622 00:24:18.072 14:50:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 80622 00:24:18.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:18.072 14:50:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=80864 00:24:18.072 14:50:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:24:18.072 14:50:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 80864 /var/tmp/bdevperf.sock 00:24:18.072 14:50:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 80864 ']' 00:24:18.072 14:50:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:18.072 14:50:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:18.072 14:50:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:18.072 14:50:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:18.072 14:50:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:18.072 [2024-11-04 14:50:27.021205] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:24:18.072 [2024-11-04 14:50:27.021471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80864 ] 00:24:18.072 [2024-11-04 14:50:27.159678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.072 [2024-11-04 14:50:27.195756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:18.334 [2024-11-04 14:50:27.226493] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:18.919 14:50:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:18.919 14:50:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:24:18.919 14:50:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80864 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:24:18.919 14:50:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=80880 00:24:18.919 14:50:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:24:19.177 14:50:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:19.434 NVMe0n1 00:24:19.434 14:50:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=80927 00:24:19.434 14:50:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:19.434 14:50:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:24:19.434 Running I/O for 10 seconds... 
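Collected from the command lines traced above, the setup for this phase of the timeout test boils down to the sequence below. This is a sketch assembled from the exact paths, RPC socket, address and NQN that appear in this log, not a separate script shipped with the suite; the comment on -z reflects how bdevperf is used here (job stays queued until perform_tests).
#!/usr/bin/env bash
set -e
spdk=/home/vagrant/spdk_repo/spdk
rpc="$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

# bdevperf was started earlier with -z, so the randread job (-q 128 -o 4096 -w randread -t 10)
# stays queued until perform_tests is invoked below.
# Options as used by the traced test (host/timeout.sh@118).
$rpc bdev_nvme_set_options -r -1 -e 9

# Attach the NVMe/TCP controller; retry the connection every 2 s and declare the
# controller lost only after 5 s without a successful reconnect.
$rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# Kick off the queued I/O job inside the already-running bdevperf instance.
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests
Pulling the listener back out, as host/timeout.sh@126 does next, is what forces the attached controller down the reconnect path governed by those two timeouts.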
00:24:20.367 14:50:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:20.628 18923.00 IOPS, 73.92 MiB/s [2024-11-04T14:50:29.768Z] [2024-11-04 14:50:29.565520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.628 [2024-11-04 14:50:29.565562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.628 [2024-11-04 14:50:29.565567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.628 [2024-11-04 14:50:29.565572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.628 [2024-11-04 14:50:29.565577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.628 [2024-11-04 14:50:29.565581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.628 [2024-11-04 14:50:29.565585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.628 [2024-11-04 14:50:29.565589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.628 [2024-11-04 14:50:29.565593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.628 [2024-11-04 14:50:29.565597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.628 [2024-11-04 14:50:29.565602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.628 [2024-11-04 14:50:29.565616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.628 [2024-11-04 14:50:29.565620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.628 [2024-11-04 14:50:29.565624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.628 [2024-11-04 14:50:29.565628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.628 [2024-11-04 14:50:29.565632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.628 [2024-11-04 14:50:29.565636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.628 [2024-11-04 14:50:29.565640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.628 [2024-11-04 14:50:29.565644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.628 [2024-11-04 14:50:29.565648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 
00:24:20.628 [2024-11-04 14:50:29.565652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set [same tcp.c:1773 recv-state message repeated verbatim for every entry timestamped between 14:50:29.565652 and 14:50:29.566008] 00:24:20.629 [2024-11-04 14:50:29.566012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.629 [2024-11-04 14:50:29.566016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.629 [2024-11-04 14:50:29.566020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.629 [2024-11-04 14:50:29.566024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.629 [2024-11-04 14:50:29.566028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.630 [2024-11-04 14:50:29.566032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.630 [2024-11-04 14:50:29.566036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.630 [2024-11-04 14:50:29.566040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.630 [2024-11-04 14:50:29.566044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.630 [2024-11-04 14:50:29.566048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.630 [2024-11-04 14:50:29.566052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.630 [2024-11-04 14:50:29.566056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.630 [2024-11-04 14:50:29.566061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.630 [2024-11-04 14:50:29.566065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.630 [2024-11-04 14:50:29.566069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.630 [2024-11-04 14:50:29.566073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.630 [2024-11-04 14:50:29.566076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.630 [2024-11-04 14:50:29.566080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108daa0 is same with the state(6) to be set 00:24:20.630 [2024-11-04 14:50:29.566119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-11-04 14:50:29.566143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.630 [2024-11-04 14:50:29.566156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-11-04 14:50:29.566161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:20.630 [2024-11-04 14:50:29.566167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:54768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-11-04 14:50:29.566172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.630 [2024-11-04 14:50:29.566178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:30712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-11-04 14:50:29.566182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.630 [2024-11-04 14:50:29.566189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:128000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-11-04 14:50:29.566193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.630 [2024-11-04 14:50:29.566200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:87864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-11-04 14:50:29.566204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.630 [2024-11-04 14:50:29.566210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:89256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-11-04 14:50:29.566215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.630 [2024-11-04 14:50:29.566221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:123584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-11-04 14:50:29.566225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.630 [2024-11-04 14:50:29.566231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:70560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-11-04 14:50:29.566236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.630 [2024-11-04 14:50:29.566242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:101456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-11-04 14:50:29.566246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.630 [2024-11-04 14:50:29.566252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-11-04 14:50:29.566257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.630 [2024-11-04 14:50:29.566263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-11-04 14:50:29.566268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:20.630 [2024-11-04 14:50:29.566274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-11-04 14:50:29.566278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.630 [2024-11-04 14:50:29.566286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:121240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-11-04 14:50:29.566291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.630 [2024-11-04 14:50:29.566297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-11-04 14:50:29.566302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.630 [2024-11-04 14:50:29.566308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:127880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-11-04 14:50:29.566313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.630 [2024-11-04 14:50:29.566319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:30592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-11-04 14:50:29.566325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.630 [2024-11-04 14:50:29.566331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:123096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-11-04 14:50:29.566335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.630 [2024-11-04 14:50:29.566342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-11-04 14:50:29.566346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.630 [2024-11-04 14:50:29.566352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-11-04 14:50:29.566357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.630 [2024-11-04 14:50:29.566363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-11-04 14:50:29.566367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.630 [2024-11-04 14:50:29.566373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:37208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-11-04 14:50:29.566378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.630 [2024-11-04 
14:50:29.566384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:92720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-11-04 14:50:29.566389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.630 [2024-11-04 14:50:29.566395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-11-04 14:50:29.566399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.630 [2024-11-04 14:50:29.566405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:63232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-11-04 14:50:29.566410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.630 [2024-11-04 14:50:29.566416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:128992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-11-04 14:50:29.566421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.630 [2024-11-04 14:50:29.566428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-11-04 14:50:29.566432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.630 [2024-11-04 14:50:29.566438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-11-04 14:50:29.566443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.630 [2024-11-04 14:50:29.566449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:44240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-11-04 14:50:29.566453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.630 [2024-11-04 14:50:29.566459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-11-04 14:50:29.566464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.630 [2024-11-04 14:50:29.566470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:69304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566491] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:79696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:41736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:91936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:129440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:124072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:56696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:112000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:79312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:82 nsid:1 lba:50896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:112976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:128920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:34480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:113928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:72496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1184 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:86280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:34072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:129976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:55312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:116528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:112336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:86656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:20.631 [2024-11-04 14:50:29.566837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:52920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:127088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:90416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.631 [2024-11-04 14:50:29.566911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:30872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-11-04 14:50:29.566916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.566922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:91264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.566927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.566933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:110904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.566937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.566943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:59328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.566947] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.566953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:77048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.566958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.566964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.566968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.566974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:72656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.566978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.566984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.566989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.566995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.566999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.567005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:40096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.567010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.567016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.567020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.567026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:104520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.567033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.567039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.567044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.567050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.567057] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.567063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:53616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.567068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.567074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:40328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.567079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.567085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.567089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.567095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.567100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.567106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:118544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.567110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.567116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:101752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.567120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.567126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.567131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.567137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.567141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.567147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:121176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.567151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.567157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.567162] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.567168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:106344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.567172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.567178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.567183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.567189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.567194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.567200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.567206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.567213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:108296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.567217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.567223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.567229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.567235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.567239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.567245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.567250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.567256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:72752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.567261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.567267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:115768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.567271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.567277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:26144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.567282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.567288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:70224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.567292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.567298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:47536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.567302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.632 [2024-11-04 14:50:29.567308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.632 [2024-11-04 14:50:29.567313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.633 [2024-11-04 14:50:29.567319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.633 [2024-11-04 14:50:29.567324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.633 [2024-11-04 14:50:29.567330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:122496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.633 [2024-11-04 14:50:29.567335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.633 [2024-11-04 14:50:29.567341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.633 [2024-11-04 14:50:29.567351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.633 [2024-11-04 14:50:29.567357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.633 [2024-11-04 14:50:29.567362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.633 [2024-11-04 14:50:29.567368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.633 [2024-11-04 14:50:29.567372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.633 [2024-11-04 14:50:29.567378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:111664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.633 [2024-11-04 14:50:29.567384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:20.633 [2024-11-04 14:50:29.567390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:68648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.633 [2024-11-04 14:50:29.567394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.633 [2024-11-04 14:50:29.567400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:69168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.633 [2024-11-04 14:50:29.567406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.633 [2024-11-04 14:50:29.567412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.633 [2024-11-04 14:50:29.567416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.633 [2024-11-04 14:50:29.567423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.633 [2024-11-04 14:50:29.567427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.633 [2024-11-04 14:50:29.567433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.633 [2024-11-04 14:50:29.567437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.633 [2024-11-04 14:50:29.567443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:79480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.633 [2024-11-04 14:50:29.567447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.633 [2024-11-04 14:50:29.567453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.633 [2024-11-04 14:50:29.567458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.633 [2024-11-04 14:50:29.567464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:81728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.633 [2024-11-04 14:50:29.567468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.633 [2024-11-04 14:50:29.567474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.633 [2024-11-04 14:50:29.567479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.633 [2024-11-04 14:50:29.567485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.633 [2024-11-04 14:50:29.567489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.633 [2024-11-04 14:50:29.567495] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.633 [2024-11-04 14:50:29.567499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.633 [2024-11-04 14:50:29.567505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.633 [2024-11-04 14:50:29.567510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.633 [2024-11-04 14:50:29.567515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:58288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.633 [2024-11-04 14:50:29.567520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.633 [2024-11-04 14:50:29.567526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.633 [2024-11-04 14:50:29.567530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.633 [2024-11-04 14:50:29.567536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717140 is same with the state(6) to be set 00:24:20.633 [2024-11-04 14:50:29.567541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.633 [2024-11-04 14:50:29.567545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.633 [2024-11-04 14:50:29.567551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42576 len:8 PRP1 0x0 PRP2 0x0 00:24:20.633 [2024-11-04 14:50:29.567556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.633 [2024-11-04 14:50:29.567785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:24:20.633 [2024-11-04 14:50:29.567833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a9e50 (9): Bad file descriptor 00:24:20.633 [2024-11-04 14:50:29.567895] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.633 [2024-11-04 14:50:29.567904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a9e50 with addr=10.0.0.3, port=4420 00:24:20.633 [2024-11-04 14:50:29.567909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a9e50 is same with the state(6) to be set 00:24:20.633 [2024-11-04 14:50:29.567917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a9e50 (9): Bad file descriptor 00:24:20.633 [2024-11-04 14:50:29.567925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:24:20.633 [2024-11-04 14:50:29.567929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:24:20.633 [2024-11-04 14:50:29.567935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:24:20.633 [2024-11-04 14:50:29.567940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:24:20.633 [2024-11-04 14:50:29.567945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:24:20.633 14:50:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 80927 00:24:22.528 10478.50 IOPS, 40.93 MiB/s [2024-11-04T14:50:31.668Z] 6985.67 IOPS, 27.29 MiB/s [2024-11-04T14:50:31.668Z] [2024-11-04 14:50:31.568179] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.528 [2024-11-04 14:50:31.568212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a9e50 with addr=10.0.0.3, port=4420 00:24:22.528 [2024-11-04 14:50:31.568220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a9e50 is same with the state(6) to be set 00:24:22.528 [2024-11-04 14:50:31.568233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a9e50 (9): Bad file descriptor 00:24:22.528 [2024-11-04 14:50:31.568244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:24:22.528 [2024-11-04 14:50:31.568249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:24:22.528 [2024-11-04 14:50:31.568256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:24:22.528 [2024-11-04 14:50:31.568263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:24:22.528 [2024-11-04 14:50:31.568269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:24:24.393 5239.25 IOPS, 20.47 MiB/s [2024-11-04T14:50:33.792Z] 4191.40 IOPS, 16.37 MiB/s [2024-11-04T14:50:33.792Z] [2024-11-04 14:50:33.568514] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.652 [2024-11-04 14:50:33.568558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a9e50 with addr=10.0.0.3, port=4420 00:24:24.652 [2024-11-04 14:50:33.568567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a9e50 is same with the state(6) to be set 00:24:24.652 [2024-11-04 14:50:33.568579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a9e50 (9): Bad file descriptor 00:24:24.652 [2024-11-04 14:50:33.568590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:24:24.652 [2024-11-04 14:50:33.568594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:24:24.652 [2024-11-04 14:50:33.568600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:24:24.652 [2024-11-04 14:50:33.568613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:24:24.652 [2024-11-04 14:50:33.568619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:24:26.535 3492.83 IOPS, 13.64 MiB/s [2024-11-04T14:50:35.675Z] 2993.86 IOPS, 11.69 MiB/s [2024-11-04T14:50:35.675Z] [2024-11-04 14:50:35.568780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:24:26.535 [2024-11-04 14:50:35.568820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:24:26.535 [2024-11-04 14:50:35.568827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:24:26.535 [2024-11-04 14:50:35.568833] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:24:26.535 [2024-11-04 14:50:35.568840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:24:27.468 2619.62 IOPS, 10.23 MiB/s 00:24:27.468 Latency(us) 00:24:27.468 [2024-11-04T14:50:36.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.468 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:24:27.468 NVMe0n1 : 8.11 2584.20 10.09 15.78 0.00 49139.88 6351.95 7020619.62 00:24:27.468 [2024-11-04T14:50:36.608Z] =================================================================================================================== 00:24:27.468 [2024-11-04T14:50:36.608Z] Total : 2584.20 10.09 15.78 0.00 49139.88 6351.95 7020619.62 00:24:27.468 { 00:24:27.468 "results": [ 00:24:27.468 { 00:24:27.468 "job": "NVMe0n1", 00:24:27.468 "core_mask": "0x4", 00:24:27.468 "workload": "randread", 00:24:27.468 "status": "finished", 00:24:27.468 "queue_depth": 128, 00:24:27.468 "io_size": 4096, 00:24:27.468 "runtime": 8.109676, 00:24:27.468 "iops": 2584.19695188809, 00:24:27.468 "mibps": 10.094519343312852, 00:24:27.468 "io_failed": 128, 00:24:27.468 "io_timeout": 0, 00:24:27.468 "avg_latency_us": 49139.88466842998, 00:24:27.468 "min_latency_us": 6351.950769230769, 00:24:27.468 "max_latency_us": 7020619.618461538 00:24:27.468 } 00:24:27.468 ], 00:24:27.468 "core_count": 1 00:24:27.468 } 00:24:27.468 14:50:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:27.468 Attaching 5 probes... 
00:24:27.468 1265.072187: reset bdev controller NVMe0 00:24:27.468 1265.146630: reconnect bdev controller NVMe0 00:24:27.468 3265.399907: reconnect delay bdev controller NVMe0 00:24:27.468 3265.413510: reconnect bdev controller NVMe0 00:24:27.468 5265.723745: reconnect delay bdev controller NVMe0 00:24:27.468 5265.738106: reconnect bdev controller NVMe0 00:24:27.468 7266.051781: reconnect delay bdev controller NVMe0 00:24:27.468 7266.068771: reconnect bdev controller NVMe0 00:24:27.468 14:50:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:24:27.468 14:50:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:24:27.468 14:50:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 80880 00:24:27.468 14:50:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:27.468 14:50:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 80864 00:24:27.468 14:50:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 80864 ']' 00:24:27.468 14:50:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 80864 00:24:27.468 14:50:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:24:27.468 14:50:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:27.468 14:50:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80864 00:24:27.726 killing process with pid 80864 00:24:27.726 Received shutdown signal, test time was about 8.164331 seconds 00:24:27.726 00:24:27.726 Latency(us) 00:24:27.726 [2024-11-04T14:50:36.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.726 [2024-11-04T14:50:36.866Z] =================================================================================================================== 00:24:27.726 [2024-11-04T14:50:36.866Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:27.726 14:50:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:27.726 14:50:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:27.726 14:50:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80864' 00:24:27.726 14:50:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 80864 00:24:27.726 14:50:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 80864 00:24:27.726 14:50:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:27.983 14:50:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:24:27.983 14:50:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:24:27.983 14:50:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:27.983 14:50:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:24:27.983 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:27.983 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:24:27.984 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:27.984 14:50:37 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:27.984 rmmod nvme_tcp 00:24:27.984 rmmod nvme_fabrics 00:24:27.984 rmmod nvme_keyring 00:24:27.984 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:27.984 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:24:27.984 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:24:27.984 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 80427 ']' 00:24:27.984 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 80427 00:24:27.984 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 80427 ']' 00:24:27.984 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 80427 00:24:27.984 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:24:27.984 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:27.984 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80427 00:24:27.984 killing process with pid 80427 00:24:27.984 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:27.984 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:27.984 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80427' 00:24:27.984 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 80427 00:24:27.984 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 80427 00:24:28.241 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:28.241 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:28.241 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:28.241 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:24:28.241 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:24:28.241 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:28.241 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:24:28.241 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:28.241 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:28.241 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:28.241 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:28.241 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:28.241 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:28.241 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:28.241 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:28.241 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:28.241 14:50:37 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:28.241 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:28.241 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:28.241 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:28.241 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:28.498 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:28.498 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:28.498 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.498 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.498 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.498 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:24:28.498 00:24:28.498 real 0m45.467s 00:24:28.498 user 2m13.558s 00:24:28.498 sys 0m4.169s 00:24:28.498 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:28.498 14:50:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:28.498 ************************************ 00:24:28.498 END TEST nvmf_timeout 00:24:28.498 ************************************ 00:24:28.498 14:50:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:24:28.498 14:50:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:24:28.498 ************************************ 00:24:28.498 END TEST nvmf_host 00:24:28.498 ************************************ 00:24:28.498 00:24:28.498 real 4m47.159s 00:24:28.498 user 12m34.106s 00:24:28.498 sys 0m51.975s 00:24:28.498 14:50:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:28.498 14:50:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.498 14:50:37 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:24:28.498 14:50:37 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:24:28.498 ************************************ 00:24:28.498 END TEST nvmf_tcp 00:24:28.498 ************************************ 00:24:28.498 00:24:28.498 real 11m35.896s 00:24:28.498 user 28m2.862s 00:24:28.498 sys 2m22.338s 00:24:28.498 14:50:37 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:28.498 14:50:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:28.498 14:50:37 -- spdk/autotest.sh@281 -- # [[ 1 -eq 0 ]] 00:24:28.498 14:50:37 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:24:28.498 14:50:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:28.498 14:50:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:28.498 14:50:37 -- common/autotest_common.sh@10 -- # set +x 00:24:28.498 ************************************ 00:24:28.498 START TEST nvmf_dif 00:24:28.498 ************************************ 00:24:28.498 14:50:37 nvmf_dif -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:24:28.498 * Looking for test storage... 
00:24:28.498 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:28.498 14:50:37 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:28.498 14:50:37 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:24:28.498 14:50:37 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:28.756 14:50:37 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:28.756 14:50:37 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:28.756 14:50:37 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:28.756 14:50:37 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:28.756 14:50:37 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:24:28.756 14:50:37 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:24:28.756 14:50:37 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:24:28.756 14:50:37 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:24:28.756 14:50:37 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:24:28.756 14:50:37 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:24:28.756 14:50:37 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:24:28.756 14:50:37 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:28.757 14:50:37 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:24:28.757 14:50:37 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:24:28.757 14:50:37 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:28.757 14:50:37 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:28.757 14:50:37 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:24:28.757 14:50:37 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:24:28.757 14:50:37 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:28.757 14:50:37 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:24:28.757 14:50:37 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:24:28.757 14:50:37 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:24:28.757 14:50:37 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:24:28.757 14:50:37 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:28.757 14:50:37 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:24:28.757 14:50:37 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:24:28.757 14:50:37 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:28.757 14:50:37 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:28.757 14:50:37 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:24:28.757 14:50:37 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:28.757 14:50:37 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:28.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.757 --rc genhtml_branch_coverage=1 00:24:28.757 --rc genhtml_function_coverage=1 00:24:28.757 --rc genhtml_legend=1 00:24:28.757 --rc geninfo_all_blocks=1 00:24:28.757 --rc geninfo_unexecuted_blocks=1 00:24:28.757 00:24:28.757 ' 00:24:28.757 14:50:37 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:28.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.757 --rc genhtml_branch_coverage=1 00:24:28.757 --rc genhtml_function_coverage=1 00:24:28.757 --rc genhtml_legend=1 00:24:28.757 --rc geninfo_all_blocks=1 00:24:28.757 --rc geninfo_unexecuted_blocks=1 00:24:28.757 00:24:28.757 ' 00:24:28.757 14:50:37 nvmf_dif -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:24:28.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.757 --rc genhtml_branch_coverage=1 00:24:28.757 --rc genhtml_function_coverage=1 00:24:28.757 --rc genhtml_legend=1 00:24:28.757 --rc geninfo_all_blocks=1 00:24:28.757 --rc geninfo_unexecuted_blocks=1 00:24:28.757 00:24:28.757 ' 00:24:28.757 14:50:37 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:28.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.757 --rc genhtml_branch_coverage=1 00:24:28.757 --rc genhtml_function_coverage=1 00:24:28.757 --rc genhtml_legend=1 00:24:28.757 --rc geninfo_all_blocks=1 00:24:28.757 --rc geninfo_unexecuted_blocks=1 00:24:28.757 00:24:28.757 ' 00:24:28.757 14:50:37 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:28.757 14:50:37 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:24:28.757 14:50:37 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.757 14:50:37 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.757 14:50:37 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.757 14:50:37 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.757 14:50:37 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.757 14:50:37 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.757 14:50:37 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:24:28.757 14:50:37 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:28.757 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:28.757 14:50:37 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:24:28.757 14:50:37 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:24:28.757 14:50:37 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:24:28.757 14:50:37 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:24:28.757 14:50:37 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.757 14:50:37 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:28.757 14:50:37 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:28.757 14:50:37 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:28.757 Cannot find device "nvmf_init_br" 00:24:28.757 14:50:37 nvmf_dif -- nvmf/common.sh@162 -- # true 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:28.758 Cannot find device "nvmf_init_br2" 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@163 -- # true 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:28.758 Cannot find device "nvmf_tgt_br" 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@164 -- # true 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:28.758 Cannot find device "nvmf_tgt_br2" 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@165 -- # true 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:28.758 Cannot find device "nvmf_init_br" 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@166 -- # true 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:28.758 Cannot find device "nvmf_init_br2" 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@167 -- # true 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:28.758 Cannot find device "nvmf_tgt_br" 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@168 -- # true 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:28.758 Cannot find device "nvmf_tgt_br2" 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@169 -- # true 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:28.758 Cannot find device "nvmf_br" 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@170 -- # true 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:24:28.758 Cannot find device "nvmf_init_if" 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@171 -- # true 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:28.758 Cannot find device "nvmf_init_if2" 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@172 -- # true 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:28.758 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@173 -- # true 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:28.758 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@174 -- # true 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:28.758 14:50:37 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:29.017 14:50:37 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:29.017 14:50:37 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:29.017 14:50:37 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:29.017 14:50:37 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:29.017 14:50:37 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:29.017 14:50:37 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:29.017 14:50:37 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:29.017 14:50:37 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:29.017 14:50:37 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:29.017 14:50:37 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:29.017 14:50:37 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:29.017 14:50:37 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:29.017 14:50:37 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:29.017 14:50:37 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:29.017 14:50:37 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:29.017 14:50:37 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:29.017 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:29.017 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:24:29.017 00:24:29.017 --- 10.0.0.3 ping statistics --- 00:24:29.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.017 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:24:29.017 14:50:37 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:29.017 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:29.017 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.030 ms 00:24:29.017 00:24:29.017 --- 10.0.0.4 ping statistics --- 00:24:29.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.017 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:24:29.017 14:50:37 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:29.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:29.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:24:29.017 00:24:29.017 --- 10.0.0.1 ping statistics --- 00:24:29.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.017 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:24:29.017 14:50:37 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:29.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:29.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:24:29.017 00:24:29.017 --- 10.0.0.2 ping statistics --- 00:24:29.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.017 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:24:29.017 14:50:37 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:29.017 14:50:37 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:24:29.017 14:50:37 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:24:29.017 14:50:37 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:29.275 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:29.275 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:29.275 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:29.275 14:50:38 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:29.275 14:50:38 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:29.275 14:50:38 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:29.275 14:50:38 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:29.275 14:50:38 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:29.275 14:50:38 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:29.275 14:50:38 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:24:29.275 14:50:38 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:24:29.275 14:50:38 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:29.275 14:50:38 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:29.275 14:50:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:29.275 14:50:38 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=81412 00:24:29.275 14:50:38 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:29.275 14:50:38 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 81412 00:24:29.275 14:50:38 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 81412 ']' 00:24:29.275 14:50:38 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.275 14:50:38 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:29.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:29.275 14:50:38 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.275 14:50:38 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:29.275 14:50:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:29.275 [2024-11-04 14:50:38.333704] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:24:29.275 [2024-11-04 14:50:38.333753] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:29.533 [2024-11-04 14:50:38.473524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.533 [2024-11-04 14:50:38.508832] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:29.533 [2024-11-04 14:50:38.508871] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:29.533 [2024-11-04 14:50:38.508877] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:29.533 [2024-11-04 14:50:38.508883] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:29.533 [2024-11-04 14:50:38.508888] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:29.533 [2024-11-04 14:50:38.509140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.533 [2024-11-04 14:50:38.540261] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:30.170 14:50:39 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:30.170 14:50:39 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:24:30.170 14:50:39 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:30.170 14:50:39 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:30.170 14:50:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:30.428 14:50:39 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:30.428 14:50:39 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:24:30.428 14:50:39 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:24:30.428 14:50:39 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.428 14:50:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:30.428 [2024-11-04 14:50:39.335899] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:30.428 14:50:39 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.428 14:50:39 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:24:30.428 14:50:39 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:30.428 14:50:39 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:30.428 14:50:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:30.428 ************************************ 00:24:30.428 START TEST fio_dif_1_default 00:24:30.428 ************************************ 00:24:30.428 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:24:30.428 14:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:24:30.428 14:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:24:30.428 14:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:24:30.428 14:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:24:30.428 14:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:24:30.428 14:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:30.428 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.428 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:30.428 bdev_null0 00:24:30.428 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.428 14:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:30.428 
14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.428 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:30.428 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.428 14:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:30.428 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.428 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:30.428 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.428 14:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:30.428 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.428 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:30.428 [2024-11-04 14:50:39.375970] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:30.428 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.428 14:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:24:30.428 14:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:24:30.428 14:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:30.428 14:50:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:24:30.428 14:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:30.429 { 00:24:30.429 "params": { 00:24:30.429 "name": "Nvme$subsystem", 00:24:30.429 "trtype": "$TEST_TRANSPORT", 00:24:30.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:30.429 "adrfam": "ipv4", 00:24:30.429 "trsvcid": "$NVMF_PORT", 00:24:30.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:30.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:30.429 "hdgst": ${hdgst:-false}, 00:24:30.429 "ddgst": ${ddgst:-false} 00:24:30.429 }, 00:24:30.429 "method": "bdev_nvme_attach_controller" 00:24:30.429 } 00:24:30.429 EOF 00:24:30.429 )") 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # 
local sanitizers 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:30.429 "params": { 00:24:30.429 "name": "Nvme0", 00:24:30.429 "trtype": "tcp", 00:24:30.429 "traddr": "10.0.0.3", 00:24:30.429 "adrfam": "ipv4", 00:24:30.429 "trsvcid": "4420", 00:24:30.429 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:30.429 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:30.429 "hdgst": false, 00:24:30.429 "ddgst": false 00:24:30.429 }, 00:24:30.429 "method": "bdev_nvme_attach_controller" 00:24:30.429 }' 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:30.429 14:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:30.429 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:30.429 fio-3.35 00:24:30.429 Starting 1 thread 00:24:42.624 00:24:42.624 filename0: (groupid=0, jobs=1): err= 0: pid=81480: Mon Nov 4 14:50:50 2024 00:24:42.624 read: IOPS=12.0k, BW=47.0MiB/s (49.3MB/s)(470MiB/10001msec) 00:24:42.624 slat (nsec): min=5398, max=39466, avg=6113.17, stdev=1136.59 00:24:42.624 clat (usec): min=269, max=4329, avg=315.98, stdev=50.66 00:24:42.624 lat (usec): min=275, max=4362, avg=322.09, stdev=51.35 00:24:42.624 clat percentiles (usec): 00:24:42.624 | 1.00th=[ 273], 5.00th=[ 281], 
10.00th=[ 281], 20.00th=[ 285], 00:24:42.624 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 297], 60.00th=[ 302], 00:24:42.624 | 70.00th=[ 310], 80.00th=[ 367], 90.00th=[ 383], 95.00th=[ 392], 00:24:42.624 | 99.00th=[ 429], 99.50th=[ 445], 99.90th=[ 519], 99.95th=[ 1074], 00:24:42.624 | 99.99th=[ 1156] 00:24:42.624 bw ( KiB/s): min=38592, max=52544, per=99.66%, avg=47947.79, stdev=5858.55, samples=19 00:24:42.624 iops : min= 9648, max=13136, avg=11986.95, stdev=1464.64, samples=19 00:24:42.624 lat (usec) : 500=99.88%, 750=0.06%, 1000=0.01% 00:24:42.624 lat (msec) : 2=0.05%, 10=0.01% 00:24:42.624 cpu : usr=88.17%, sys=10.79%, ctx=19, majf=0, minf=9 00:24:42.624 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:42.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:42.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:42.624 issued rwts: total=120284,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:42.624 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:42.624 00:24:42.624 Run status group 0 (all jobs): 00:24:42.624 READ: bw=47.0MiB/s (49.3MB/s), 47.0MiB/s-47.0MiB/s (49.3MB/s-49.3MB/s), io=470MiB (493MB), run=10001-10001msec 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.624 00:24:42.624 real 0m10.822s 00:24:42.624 user 0m9.306s 00:24:42.624 sys 0m1.251s 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:42.624 ************************************ 00:24:42.624 END TEST fio_dif_1_default 00:24:42.624 ************************************ 00:24:42.624 14:50:50 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:24:42.624 14:50:50 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:42.624 14:50:50 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:42.624 14:50:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:42.624 ************************************ 00:24:42.624 START TEST fio_dif_1_multi_subsystems 00:24:42.624 ************************************ 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:42.624 bdev_null0 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:42.624 [2024-11-04 14:50:50.240351] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:42.624 bdev_null1 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:24:42.624 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:42.625 { 00:24:42.625 "params": { 00:24:42.625 "name": "Nvme$subsystem", 00:24:42.625 "trtype": "$TEST_TRANSPORT", 00:24:42.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:42.625 "adrfam": "ipv4", 00:24:42.625 "trsvcid": "$NVMF_PORT", 00:24:42.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:42.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:42.625 "hdgst": ${hdgst:-false}, 00:24:42.625 "ddgst": ${ddgst:-false} 00:24:42.625 }, 00:24:42.625 "method": "bdev_nvme_attach_controller" 00:24:42.625 } 00:24:42.625 EOF 00:24:42.625 )") 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 
00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:42.625 { 00:24:42.625 "params": { 00:24:42.625 "name": "Nvme$subsystem", 00:24:42.625 "trtype": "$TEST_TRANSPORT", 00:24:42.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:42.625 "adrfam": "ipv4", 00:24:42.625 "trsvcid": "$NVMF_PORT", 00:24:42.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:42.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:42.625 "hdgst": ${hdgst:-false}, 00:24:42.625 "ddgst": ${ddgst:-false} 00:24:42.625 }, 00:24:42.625 "method": "bdev_nvme_attach_controller" 00:24:42.625 } 00:24:42.625 EOF 00:24:42.625 )") 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:42.625 "params": { 00:24:42.625 "name": "Nvme0", 00:24:42.625 "trtype": "tcp", 00:24:42.625 "traddr": "10.0.0.3", 00:24:42.625 "adrfam": "ipv4", 00:24:42.625 "trsvcid": "4420", 00:24:42.625 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:42.625 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:42.625 "hdgst": false, 00:24:42.625 "ddgst": false 00:24:42.625 }, 00:24:42.625 "method": "bdev_nvme_attach_controller" 00:24:42.625 },{ 00:24:42.625 "params": { 00:24:42.625 "name": "Nvme1", 00:24:42.625 "trtype": "tcp", 00:24:42.625 "traddr": "10.0.0.3", 00:24:42.625 "adrfam": "ipv4", 00:24:42.625 "trsvcid": "4420", 00:24:42.625 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:42.625 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:42.625 "hdgst": false, 00:24:42.625 "ddgst": false 00:24:42.625 }, 00:24:42.625 "method": "bdev_nvme_attach_controller" 00:24:42.625 }' 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:42.625 14:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:42.625 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:42.625 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:42.625 fio-3.35 00:24:42.625 Starting 2 threads 00:24:52.605 00:24:52.605 filename0: (groupid=0, jobs=1): err= 0: pid=81646: Mon Nov 4 14:51:00 2024 00:24:52.605 read: IOPS=6858, BW=26.8MiB/s (28.1MB/s)(268MiB/10001msec) 00:24:52.605 slat (nsec): min=5567, max=45070, avg=8415.55, stdev=4794.85 00:24:52.605 clat (usec): min=282, max=999, avg=560.75, stdev=27.68 00:24:52.605 lat (usec): min=288, max=1011, avg=569.16, stdev=29.19 00:24:52.605 clat percentiles (usec): 00:24:52.605 | 1.00th=[ 519], 5.00th=[ 529], 10.00th=[ 537], 20.00th=[ 537], 00:24:52.605 | 30.00th=[ 545], 40.00th=[ 553], 50.00th=[ 553], 60.00th=[ 562], 00:24:52.605 | 70.00th=[ 570], 80.00th=[ 578], 90.00th=[ 594], 95.00th=[ 611], 00:24:52.605 | 99.00th=[ 644], 99.50th=[ 660], 99.90th=[ 693], 99.95th=[ 717], 00:24:52.605 | 99.99th=[ 955] 00:24:52.605 bw ( KiB/s): min=26336, max=28736, per=50.07%, avg=27456.00, stdev=620.87, samples=19 00:24:52.605 iops : min= 6584, max= 7184, 
avg=6864.00, stdev=155.22, samples=19 00:24:52.605 lat (usec) : 500=0.15%, 750=99.81%, 1000=0.04% 00:24:52.605 cpu : usr=90.83%, sys=8.42%, ctx=19, majf=0, minf=0 00:24:52.605 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:52.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.605 issued rwts: total=68592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.605 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:52.605 filename1: (groupid=0, jobs=1): err= 0: pid=81647: Mon Nov 4 14:51:00 2024 00:24:52.605 read: IOPS=6850, BW=26.8MiB/s (28.1MB/s)(268MiB/10001msec) 00:24:52.605 slat (nsec): min=5570, max=60606, avg=8914.20, stdev=4760.12 00:24:52.605 clat (usec): min=333, max=1873, avg=560.38, stdev=33.69 00:24:52.605 lat (usec): min=341, max=1879, avg=569.29, stdev=35.16 00:24:52.605 clat percentiles (usec): 00:24:52.605 | 1.00th=[ 490], 5.00th=[ 510], 10.00th=[ 523], 20.00th=[ 537], 00:24:52.605 | 30.00th=[ 545], 40.00th=[ 553], 50.00th=[ 562], 60.00th=[ 570], 00:24:52.605 | 70.00th=[ 578], 80.00th=[ 586], 90.00th=[ 603], 95.00th=[ 611], 00:24:52.605 | 99.00th=[ 644], 99.50th=[ 660], 99.90th=[ 750], 99.95th=[ 816], 00:24:52.605 | 99.99th=[ 963] 00:24:52.605 bw ( KiB/s): min=26336, max=28160, per=50.01%, avg=27422.32, stdev=571.68, samples=19 00:24:52.605 iops : min= 6584, max= 7040, avg=6855.58, stdev=142.92, samples=19 00:24:52.605 lat (usec) : 500=2.63%, 750=97.27%, 1000=0.09% 00:24:52.605 lat (msec) : 2=0.01% 00:24:52.605 cpu : usr=91.00%, sys=8.17%, ctx=28, majf=0, minf=0 00:24:52.606 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:52.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.606 issued rwts: total=68508,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.606 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:52.606 00:24:52.606 Run status group 0 (all jobs): 00:24:52.606 READ: bw=53.5MiB/s (56.1MB/s), 26.8MiB/s-26.8MiB/s (28.1MB/s-28.1MB/s), io=536MiB (562MB), run=10001-10001msec 00:24:52.606 14:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:24:52.606 14:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:24:52.606 14:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:24:52.606 14:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:52.606 14:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:24:52.606 14:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:52.606 14:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.606 14:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:52.606 14:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.606 14:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:52.606 14:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.606 14:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:52.606 
14:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.606 14:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:24:52.606 14:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:52.606 14:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:24:52.606 14:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:52.606 14:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.606 14:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:52.606 14:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.606 14:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:52.606 14:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.606 14:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:52.606 14:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.606 00:24:52.606 real 0m10.927s 00:24:52.606 user 0m18.777s 00:24:52.606 sys 0m1.844s 00:24:52.606 14:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:52.606 14:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:52.606 ************************************ 00:24:52.606 END TEST fio_dif_1_multi_subsystems 00:24:52.606 ************************************ 00:24:52.606 14:51:01 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:24:52.606 14:51:01 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:52.606 14:51:01 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:52.606 14:51:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:52.606 ************************************ 00:24:52.606 START TEST fio_dif_rand_params 00:24:52.606 ************************************ 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:52.606 bdev_null0 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:52.606 [2024-11-04 14:51:01.209528] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:52.606 { 00:24:52.606 "params": { 00:24:52.606 "name": "Nvme$subsystem", 00:24:52.606 "trtype": "$TEST_TRANSPORT", 00:24:52.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.606 "adrfam": "ipv4", 00:24:52.606 "trsvcid": "$NVMF_PORT", 00:24:52.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.606 "hdgst": ${hdgst:-false}, 00:24:52.606 "ddgst": ${ddgst:-false} 00:24:52.606 }, 00:24:52.606 "method": "bdev_nvme_attach_controller" 00:24:52.606 } 00:24:52.606 EOF 00:24:52.606 )") 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:52.606 "params": { 00:24:52.606 "name": "Nvme0", 00:24:52.606 "trtype": "tcp", 00:24:52.606 "traddr": "10.0.0.3", 00:24:52.606 "adrfam": "ipv4", 00:24:52.606 "trsvcid": "4420", 00:24:52.606 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:52.606 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:52.606 "hdgst": false, 00:24:52.606 "ddgst": false 00:24:52.606 }, 00:24:52.606 "method": "bdev_nvme_attach_controller" 00:24:52.606 }' 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:52.606 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:52.607 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:52.607 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:52.607 14:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:52.607 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:52.607 ... 00:24:52.607 fio-3.35 00:24:52.607 Starting 3 threads 00:24:57.867 00:24:57.867 filename0: (groupid=0, jobs=1): err= 0: pid=81802: Mon Nov 4 14:51:06 2024 00:24:57.867 read: IOPS=354, BW=44.3MiB/s (46.4MB/s)(222MiB/5006msec) 00:24:57.867 slat (nsec): min=5396, max=25169, avg=7505.12, stdev=1563.89 00:24:57.867 clat (usec): min=4630, max=10503, avg=8452.80, stdev=295.29 00:24:57.867 lat (usec): min=4636, max=10511, avg=8460.31, stdev=295.42 00:24:57.867 clat percentiles (usec): 00:24:57.867 | 1.00th=[ 8160], 5.00th=[ 8160], 10.00th=[ 8160], 20.00th=[ 8160], 00:24:57.867 | 30.00th=[ 8225], 40.00th=[ 8356], 50.00th=[ 8455], 60.00th=[ 8586], 00:24:57.867 | 70.00th=[ 8717], 80.00th=[ 8717], 90.00th=[ 8717], 95.00th=[ 8717], 00:24:57.867 | 99.00th=[ 8979], 99.50th=[ 9503], 99.90th=[10552], 99.95th=[10552], 00:24:57.867 | 99.99th=[10552] 00:24:57.867 bw ( KiB/s): min=43776, max=46848, per=33.35%, avg=45303.10, stdev=1031.77, samples=10 00:24:57.867 iops : min= 342, max= 366, avg=353.90, stdev= 8.09, samples=10 00:24:57.867 lat (msec) : 10=99.83%, 20=0.17% 00:24:57.867 cpu : usr=91.41%, sys=8.13%, ctx=7, majf=0, minf=0 00:24:57.867 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:57.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.867 issued rwts: total=1773,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.867 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:57.867 filename0: (groupid=0, jobs=1): err= 0: pid=81803: Mon Nov 4 14:51:06 2024 00:24:57.867 read: IOPS=353, BW=44.2MiB/s (46.4MB/s)(221MiB/5001msec) 00:24:57.867 slat (nsec): min=5400, max=32758, avg=8950.95, stdev=4467.81 00:24:57.867 clat (usec): min=6950, max=10521, avg=8455.76, stdev=258.79 00:24:57.867 lat (usec): min=6955, max=10533, avg=8464.72, stdev=259.92 00:24:57.867 clat percentiles (usec): 00:24:57.867 | 1.00th=[ 8160], 5.00th=[ 8160], 10.00th=[ 8160], 20.00th=[ 8160], 00:24:57.867 | 30.00th=[ 8225], 40.00th=[ 8356], 50.00th=[ 8455], 60.00th=[ 8586], 00:24:57.867 | 70.00th=[ 8717], 80.00th=[ 8717], 90.00th=[ 8717], 95.00th=[ 8717], 00:24:57.867 | 99.00th=[ 9110], 99.50th=[ 9372], 99.90th=[10552], 99.95th=[10552], 00:24:57.867 | 99.99th=[10552] 00:24:57.867 bw ( KiB/s): min=43776, max=46848, per=33.28%, avg=45216.22, stdev=1100.94, samples=9 00:24:57.867 iops : min= 342, max= 366, avg=353.11, stdev= 8.71, samples=9 00:24:57.867 lat (msec) : 10=99.83%, 20=0.17% 00:24:57.867 cpu : usr=91.66%, sys=7.88%, ctx=51, majf=0, minf=0 00:24:57.867 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:57.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.867 issued rwts: total=1770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.867 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:57.867 filename0: (groupid=0, jobs=1): err= 0: pid=81804: Mon Nov 4 14:51:06 2024 00:24:57.867 read: IOPS=353, BW=44.2MiB/s (46.4MB/s)(221MiB/5002msec) 00:24:57.867 slat (nsec): min=3859, max=46716, avg=9698.26, stdev=4751.12 00:24:57.867 clat (usec): min=5885, max=10521, avg=8454.97, stdev=281.03 00:24:57.867 lat (usec): min=5893, max=10533, avg=8464.67, 
stdev=282.13 00:24:57.867 clat percentiles (usec): 00:24:57.867 | 1.00th=[ 8160], 5.00th=[ 8160], 10.00th=[ 8160], 20.00th=[ 8160], 00:24:57.867 | 30.00th=[ 8225], 40.00th=[ 8356], 50.00th=[ 8455], 60.00th=[ 8586], 00:24:57.867 | 70.00th=[ 8717], 80.00th=[ 8717], 90.00th=[ 8717], 95.00th=[ 8717], 00:24:57.867 | 99.00th=[ 8979], 99.50th=[ 9896], 99.90th=[10552], 99.95th=[10552], 00:24:57.867 | 99.99th=[10552] 00:24:57.867 bw ( KiB/s): min=43776, max=46848, per=33.28%, avg=45206.56, stdev=1114.60, samples=9 00:24:57.867 iops : min= 342, max= 366, avg=353.11, stdev= 8.71, samples=9 00:24:57.867 lat (msec) : 10=99.66%, 20=0.34% 00:24:57.867 cpu : usr=90.90%, sys=8.36%, ctx=50, majf=0, minf=0 00:24:57.867 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:57.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.867 issued rwts: total=1770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.867 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:57.867 00:24:57.867 Run status group 0 (all jobs): 00:24:57.867 READ: bw=133MiB/s (139MB/s), 44.2MiB/s-44.3MiB/s (46.4MB/s-46.4MB/s), io=664MiB (696MB), run=5001-5006msec 00:24:57.867 14:51:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:24:57.867 14:51:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:57.867 14:51:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:57.867 14:51:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:57.867 14:51:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:57.867 14:51:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:57.867 14:51:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.867 14:51:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:57.867 14:51:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.867 14:51:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:57.867 14:51:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.867 14:51:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:57.867 14:51:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.867 14:51:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:24:57.867 14:51:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:24:57.867 14:51:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:24:57.867 14:51:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:24:57.867 14:51:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:24:57.867 14:51:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:24:57.867 14:51:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:24:57.867 14:51:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:57.867 14:51:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:57.867 14:51:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:57.867 14:51:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # 
local sub_id=0 00:24:57.867 14:51:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:24:57.867 14:51:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.868 14:51:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:57.868 bdev_null0 00:24:57.868 14:51:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.868 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:57.868 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.868 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:58.126 [2024-11-04 14:51:07.019679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:58.126 bdev_null1 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:58.126 bdev_null2 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@560 -- # local subsystem config 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:58.126 { 00:24:58.126 "params": { 00:24:58.126 "name": "Nvme$subsystem", 00:24:58.126 "trtype": "$TEST_TRANSPORT", 00:24:58.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.126 "adrfam": "ipv4", 00:24:58.126 "trsvcid": "$NVMF_PORT", 00:24:58.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.126 "hdgst": ${hdgst:-false}, 00:24:58.126 "ddgst": ${ddgst:-false} 00:24:58.126 }, 00:24:58.126 "method": "bdev_nvme_attach_controller" 00:24:58.126 } 00:24:58.126 EOF 00:24:58.126 )") 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:58.126 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:24:58.127 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:58.127 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:58.127 14:51:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:58.127 14:51:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:58.127 { 00:24:58.127 "params": { 00:24:58.127 "name": "Nvme$subsystem", 00:24:58.127 "trtype": "$TEST_TRANSPORT", 00:24:58.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.127 "adrfam": "ipv4", 00:24:58.127 "trsvcid": "$NVMF_PORT", 00:24:58.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.127 "hdgst": ${hdgst:-false}, 00:24:58.127 "ddgst": ${ddgst:-false} 00:24:58.127 }, 00:24:58.127 "method": "bdev_nvme_attach_controller" 00:24:58.127 } 00:24:58.127 EOF 00:24:58.127 )") 00:24:58.127 14:51:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:58.127 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:58.127 
14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:58.127 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:58.127 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:58.127 14:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:58.127 14:51:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:58.127 14:51:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:58.127 { 00:24:58.127 "params": { 00:24:58.127 "name": "Nvme$subsystem", 00:24:58.127 "trtype": "$TEST_TRANSPORT", 00:24:58.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.127 "adrfam": "ipv4", 00:24:58.127 "trsvcid": "$NVMF_PORT", 00:24:58.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.127 "hdgst": ${hdgst:-false}, 00:24:58.127 "ddgst": ${ddgst:-false} 00:24:58.127 }, 00:24:58.127 "method": "bdev_nvme_attach_controller" 00:24:58.127 } 00:24:58.127 EOF 00:24:58.127 )") 00:24:58.127 14:51:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:58.127 14:51:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:24:58.127 14:51:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:24:58.127 14:51:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:58.127 "params": { 00:24:58.127 "name": "Nvme0", 00:24:58.127 "trtype": "tcp", 00:24:58.127 "traddr": "10.0.0.3", 00:24:58.127 "adrfam": "ipv4", 00:24:58.127 "trsvcid": "4420", 00:24:58.127 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:58.127 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:58.127 "hdgst": false, 00:24:58.127 "ddgst": false 00:24:58.127 }, 00:24:58.127 "method": "bdev_nvme_attach_controller" 00:24:58.127 },{ 00:24:58.127 "params": { 00:24:58.127 "name": "Nvme1", 00:24:58.127 "trtype": "tcp", 00:24:58.127 "traddr": "10.0.0.3", 00:24:58.127 "adrfam": "ipv4", 00:24:58.127 "trsvcid": "4420", 00:24:58.127 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:58.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:58.127 "hdgst": false, 00:24:58.127 "ddgst": false 00:24:58.127 }, 00:24:58.127 "method": "bdev_nvme_attach_controller" 00:24:58.127 },{ 00:24:58.127 "params": { 00:24:58.127 "name": "Nvme2", 00:24:58.127 "trtype": "tcp", 00:24:58.127 "traddr": "10.0.0.3", 00:24:58.127 "adrfam": "ipv4", 00:24:58.127 "trsvcid": "4420", 00:24:58.127 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:58.127 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:58.127 "hdgst": false, 00:24:58.127 "ddgst": false 00:24:58.127 }, 00:24:58.127 "method": "bdev_nvme_attach_controller" 00:24:58.127 }' 00:24:58.127 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:58.127 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:58.127 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:58.127 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:58.127 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:58.127 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:24:58.127 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:58.127 
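The gen_nvmf_target_json output printed above is what fio receives on /dev/fd/62 via --spdk_json_conf: one bdev_nvme_attach_controller entry per target subsystem. Only the per-controller entries are shown verbatim in the trace; the outer "subsystems"/"bdev" wrapper below is inferred from the jq step and from how the spdk_bdev ioengine consumes JSON configs, so treat it as an assumption:

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}

The Nvme1 and Nvme2 entries in the printed config differ only in name, subnqn, and hostnqn.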
14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:58.127 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:58.127 14:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:58.385 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:58.385 ... 00:24:58.385 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:58.385 ... 00:24:58.385 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:58.385 ... 00:24:58.385 fio-3.35 00:24:58.385 Starting 24 threads 00:25:10.572 00:25:10.572 filename0: (groupid=0, jobs=1): err= 0: pid=81899: Mon Nov 4 14:51:17 2024 00:25:10.572 read: IOPS=252, BW=1011KiB/s (1035kB/s)(9.91MiB/10035msec) 00:25:10.572 slat (nsec): min=4268, max=83249, avg=9812.80, stdev=6253.37 00:25:10.572 clat (msec): min=7, max=129, avg=63.20, stdev=18.67 00:25:10.572 lat (msec): min=7, max=129, avg=63.21, stdev=18.67 00:25:10.572 clat percentiles (msec): 00:25:10.572 | 1.00th=[ 12], 5.00th=[ 36], 10.00th=[ 44], 20.00th=[ 48], 00:25:10.572 | 30.00th=[ 53], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 69], 00:25:10.572 | 70.00th=[ 75], 80.00th=[ 83], 90.00th=[ 86], 95.00th=[ 92], 00:25:10.572 | 99.00th=[ 102], 99.50th=[ 130], 99.90th=[ 130], 99.95th=[ 130], 00:25:10.572 | 99.99th=[ 130] 00:25:10.572 bw ( KiB/s): min= 784, max= 1408, per=4.16%, avg=1008.00, stdev=169.67, samples=20 00:25:10.572 iops : min= 196, max= 352, avg=252.00, stdev=42.42, samples=20 00:25:10.572 lat (msec) : 10=0.55%, 20=1.34%, 50=24.29%, 100=72.75%, 250=1.06% 00:25:10.572 cpu : usr=40.41%, sys=1.22%, ctx=1310, majf=0, minf=9 00:25:10.572 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=80.6%, 16=16.6%, 32=0.0%, >=64=0.0% 00:25:10.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.572 complete : 0=0.0%, 4=88.3%, 8=11.3%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.572 issued rwts: total=2536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.572 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:10.572 filename0: (groupid=0, jobs=1): err= 0: pid=81900: Mon Nov 4 14:51:17 2024 00:25:10.572 read: IOPS=251, BW=1007KiB/s (1031kB/s)(9.88MiB/10049msec) 00:25:10.572 slat (usec): min=5, max=3954, avg=12.00, stdev=81.45 00:25:10.572 clat (usec): min=1966, max=119977, avg=63450.49, stdev=22636.10 00:25:10.572 lat (usec): min=1973, max=119988, avg=63462.48, stdev=22635.33 00:25:10.572 clat percentiles (msec): 00:25:10.572 | 1.00th=[ 3], 5.00th=[ 29], 10.00th=[ 36], 20.00th=[ 45], 00:25:10.572 | 30.00th=[ 53], 40.00th=[ 57], 50.00th=[ 63], 60.00th=[ 74], 00:25:10.572 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 89], 95.00th=[ 95], 00:25:10.572 | 99.00th=[ 110], 99.50th=[ 110], 99.90th=[ 113], 99.95th=[ 121], 00:25:10.572 | 99.99th=[ 121] 00:25:10.572 bw ( KiB/s): min= 656, max= 2035, per=4.15%, avg=1004.55, stdev=324.49, samples=20 00:25:10.572 iops : min= 164, max= 508, avg=251.10, stdev=81.00, samples=20 00:25:10.572 lat (msec) : 2=0.20%, 4=1.62%, 10=1.98%, 20=0.63%, 50=21.51% 00:25:10.572 lat (msec) : 100=71.93%, 250=2.14% 00:25:10.572 cpu : usr=44.21%, sys=1.68%, ctx=1229, majf=0, minf=9 00:25:10.572 IO depths : 1=0.2%, 2=2.7%, 4=10.4%, 8=72.0%, 16=14.8%, 
32=0.0%, >=64=0.0% 00:25:10.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.572 complete : 0=0.0%, 4=90.3%, 8=7.5%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.572 issued rwts: total=2529,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.572 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:10.572 filename0: (groupid=0, jobs=1): err= 0: pid=81901: Mon Nov 4 14:51:17 2024 00:25:10.572 read: IOPS=254, BW=1018KiB/s (1042kB/s)(9.97MiB/10030msec) 00:25:10.572 slat (usec): min=4, max=7204, avg=20.89, stdev=215.25 00:25:10.572 clat (msec): min=20, max=115, avg=62.76, stdev=17.65 00:25:10.572 lat (msec): min=20, max=115, avg=62.78, stdev=17.65 00:25:10.572 clat percentiles (msec): 00:25:10.572 | 1.00th=[ 30], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 48], 00:25:10.572 | 30.00th=[ 53], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 65], 00:25:10.572 | 70.00th=[ 74], 80.00th=[ 82], 90.00th=[ 87], 95.00th=[ 92], 00:25:10.572 | 99.00th=[ 97], 99.50th=[ 101], 99.90th=[ 113], 99.95th=[ 116], 00:25:10.572 | 99.99th=[ 116] 00:25:10.572 bw ( KiB/s): min= 768, max= 1264, per=4.19%, avg=1015.60, stdev=179.65, samples=20 00:25:10.572 iops : min= 192, max= 316, avg=253.90, stdev=44.91, samples=20 00:25:10.572 lat (msec) : 50=25.39%, 100=73.75%, 250=0.86% 00:25:10.572 cpu : usr=44.89%, sys=1.23%, ctx=1262, majf=0, minf=9 00:25:10.572 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=79.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:25:10.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.572 complete : 0=0.0%, 4=88.3%, 8=10.9%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.572 issued rwts: total=2552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.572 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:10.572 filename0: (groupid=0, jobs=1): err= 0: pid=81902: Mon Nov 4 14:51:17 2024 00:25:10.572 read: IOPS=254, BW=1018KiB/s (1043kB/s)(9.97MiB/10024msec) 00:25:10.572 slat (usec): min=5, max=8015, avg=18.30, stdev=194.19 00:25:10.572 clat (msec): min=23, max=120, avg=62.72, stdev=17.20 00:25:10.572 lat (msec): min=23, max=120, avg=62.74, stdev=17.20 00:25:10.572 clat percentiles (msec): 00:25:10.572 | 1.00th=[ 28], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 48], 00:25:10.572 | 30.00th=[ 50], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 65], 00:25:10.572 | 70.00th=[ 73], 80.00th=[ 82], 90.00th=[ 85], 95.00th=[ 93], 00:25:10.572 | 99.00th=[ 99], 99.50th=[ 100], 99.90th=[ 108], 99.95th=[ 109], 00:25:10.572 | 99.99th=[ 121] 00:25:10.572 bw ( KiB/s): min= 792, max= 1280, per=4.20%, avg=1016.80, stdev=152.34, samples=20 00:25:10.572 iops : min= 198, max= 320, avg=254.20, stdev=38.09, samples=20 00:25:10.572 lat (msec) : 50=30.88%, 100=68.85%, 250=0.27% 00:25:10.572 cpu : usr=34.57%, sys=1.06%, ctx=931, majf=0, minf=9 00:25:10.572 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.2%, 16=16.1%, 32=0.0%, >=64=0.0% 00:25:10.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.572 complete : 0=0.0%, 4=87.8%, 8=11.7%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.572 issued rwts: total=2552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.572 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:10.572 filename0: (groupid=0, jobs=1): err= 0: pid=81903: Mon Nov 4 14:51:17 2024 00:25:10.572 read: IOPS=257, BW=1030KiB/s (1055kB/s)(10.1MiB/10023msec) 00:25:10.572 slat (usec): min=4, max=10027, avg=22.15, stdev=289.83 00:25:10.572 clat (msec): min=22, max=108, avg=61.96, stdev=17.47 00:25:10.572 lat (msec): min=22, max=108, avg=61.98, stdev=17.47 
00:25:10.572 clat percentiles (msec): 00:25:10.572 | 1.00th=[ 26], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 48], 00:25:10.572 | 30.00th=[ 51], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 63], 00:25:10.572 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 85], 95.00th=[ 93], 00:25:10.572 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 108], 99.95th=[ 108], 00:25:10.572 | 99.99th=[ 109] 00:25:10.572 bw ( KiB/s): min= 792, max= 1272, per=4.25%, avg=1029.30, stdev=165.33, samples=20 00:25:10.572 iops : min= 198, max= 318, avg=257.30, stdev=41.30, samples=20 00:25:10.572 lat (msec) : 50=30.33%, 100=69.44%, 250=0.23% 00:25:10.572 cpu : usr=32.28%, sys=0.95%, ctx=909, majf=0, minf=9 00:25:10.572 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.1%, 16=16.3%, 32=0.0%, >=64=0.0% 00:25:10.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.572 complete : 0=0.0%, 4=87.7%, 8=12.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.572 issued rwts: total=2582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.572 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:10.572 filename0: (groupid=0, jobs=1): err= 0: pid=81904: Mon Nov 4 14:51:17 2024 00:25:10.572 read: IOPS=256, BW=1025KiB/s (1049kB/s)(10.0MiB/10003msec) 00:25:10.572 slat (usec): min=3, max=8018, avg=25.15, stdev=285.75 00:25:10.572 clat (msec): min=5, max=122, avg=62.33, stdev=18.35 00:25:10.572 lat (msec): min=5, max=122, avg=62.35, stdev=18.35 00:25:10.572 clat percentiles (msec): 00:25:10.572 | 1.00th=[ 27], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 48], 00:25:10.572 | 30.00th=[ 52], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 65], 00:25:10.572 | 70.00th=[ 74], 80.00th=[ 82], 90.00th=[ 88], 95.00th=[ 94], 00:25:10.572 | 99.00th=[ 103], 99.50th=[ 104], 99.90th=[ 111], 99.95th=[ 124], 00:25:10.572 | 99.99th=[ 124] 00:25:10.572 bw ( KiB/s): min= 768, max= 1280, per=4.16%, avg=1007.16, stdev=175.31, samples=19 00:25:10.572 iops : min= 192, max= 320, avg=251.79, stdev=43.83, samples=19 00:25:10.572 lat (msec) : 10=0.12%, 20=0.51%, 50=27.35%, 100=70.70%, 250=1.33% 00:25:10.572 cpu : usr=43.24%, sys=0.99%, ctx=1253, majf=0, minf=9 00:25:10.572 IO depths : 1=0.1%, 2=0.9%, 4=3.6%, 8=79.9%, 16=15.6%, 32=0.0%, >=64=0.0% 00:25:10.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.572 complete : 0=0.0%, 4=87.9%, 8=11.3%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.572 issued rwts: total=2563,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.572 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:10.572 filename0: (groupid=0, jobs=1): err= 0: pid=81905: Mon Nov 4 14:51:17 2024 00:25:10.572 read: IOPS=248, BW=995KiB/s (1019kB/s)(9972KiB/10021msec) 00:25:10.572 slat (usec): min=3, max=9023, avg=24.84, stdev=311.91 00:25:10.573 clat (msec): min=23, max=110, avg=64.20, stdev=17.82 00:25:10.573 lat (msec): min=23, max=110, avg=64.22, stdev=17.80 00:25:10.573 clat percentiles (msec): 00:25:10.573 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 48], 00:25:10.573 | 30.00th=[ 53], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 72], 00:25:10.573 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 86], 95.00th=[ 95], 00:25:10.573 | 99.00th=[ 100], 99.50th=[ 102], 99.90th=[ 110], 99.95th=[ 110], 00:25:10.573 | 99.99th=[ 111] 00:25:10.573 bw ( KiB/s): min= 784, max= 1264, per=4.09%, avg=990.80, stdev=186.05, samples=20 00:25:10.573 iops : min= 196, max= 316, avg=247.70, stdev=46.51, samples=20 00:25:10.573 lat (msec) : 50=27.08%, 100=72.04%, 250=0.88% 00:25:10.573 cpu : usr=34.24%, sys=0.87%, ctx=1185, majf=0, minf=9 00:25:10.573 
IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=80.5%, 16=16.6%, 32=0.0%, >=64=0.0% 00:25:10.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.573 complete : 0=0.0%, 4=88.3%, 8=11.2%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.573 issued rwts: total=2493,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.573 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:10.573 filename0: (groupid=0, jobs=1): err= 0: pid=81906: Mon Nov 4 14:51:17 2024 00:25:10.573 read: IOPS=247, BW=991KiB/s (1015kB/s)(9944KiB/10036msec) 00:25:10.573 slat (usec): min=3, max=8015, avg=16.77, stdev=196.71 00:25:10.573 clat (msec): min=8, max=119, avg=64.50, stdev=18.82 00:25:10.573 lat (msec): min=8, max=119, avg=64.52, stdev=18.82 00:25:10.573 clat percentiles (msec): 00:25:10.573 | 1.00th=[ 9], 5.00th=[ 36], 10.00th=[ 44], 20.00th=[ 48], 00:25:10.573 | 30.00th=[ 54], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 71], 00:25:10.573 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 88], 95.00th=[ 94], 00:25:10.573 | 99.00th=[ 103], 99.50th=[ 107], 99.90th=[ 116], 99.95th=[ 116], 00:25:10.573 | 99.99th=[ 121] 00:25:10.573 bw ( KiB/s): min= 768, max= 1536, per=4.08%, avg=988.00, stdev=196.88, samples=20 00:25:10.573 iops : min= 192, max= 384, avg=247.00, stdev=49.22, samples=20 00:25:10.573 lat (msec) : 10=1.21%, 20=0.72%, 50=23.97%, 100=73.09%, 250=1.01% 00:25:10.573 cpu : usr=40.05%, sys=1.20%, ctx=1047, majf=0, minf=9 00:25:10.573 IO depths : 1=0.1%, 2=1.3%, 4=5.1%, 8=77.4%, 16=16.2%, 32=0.0%, >=64=0.0% 00:25:10.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.573 complete : 0=0.0%, 4=89.2%, 8=9.7%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.573 issued rwts: total=2486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.573 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:10.573 filename1: (groupid=0, jobs=1): err= 0: pid=81907: Mon Nov 4 14:51:17 2024 00:25:10.573 read: IOPS=272, BW=1088KiB/s (1114kB/s)(10.6MiB/10001msec) 00:25:10.573 slat (usec): min=3, max=4018, avg=11.68, stdev=77.11 00:25:10.573 clat (usec): min=670, max=110054, avg=58756.58, stdev=22424.55 00:25:10.573 lat (usec): min=676, max=110061, avg=58768.26, stdev=22423.17 00:25:10.573 clat percentiles (usec): 00:25:10.573 | 1.00th=[ 996], 5.00th=[ 2212], 10.00th=[ 33817], 20.00th=[ 41681], 00:25:10.573 | 30.00th=[ 47973], 40.00th=[ 54789], 50.00th=[ 58983], 60.00th=[ 64226], 00:25:10.573 | 70.00th=[ 71828], 80.00th=[ 81265], 90.00th=[ 84411], 95.00th=[ 92799], 00:25:10.573 | 99.00th=[ 95945], 99.50th=[ 96994], 99.90th=[107480], 99.95th=[107480], 00:25:10.573 | 99.99th=[109577] 00:25:10.573 bw ( KiB/s): min= 840, max= 1280, per=4.16%, avg=1007.58, stdev=175.56, samples=19 00:25:10.573 iops : min= 210, max= 320, avg=251.89, stdev=43.89, samples=19 00:25:10.573 lat (usec) : 750=0.11%, 1000=0.92% 00:25:10.573 lat (msec) : 2=3.71%, 4=0.48%, 10=0.11%, 20=0.48%, 50=28.04% 00:25:10.573 lat (msec) : 100=65.93%, 250=0.22% 00:25:10.573 cpu : usr=36.64%, sys=1.02%, ctx=1388, majf=0, minf=9 00:25:10.573 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.6%, 16=16.4%, 32=0.0%, >=64=0.0% 00:25:10.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.573 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.573 issued rwts: total=2721,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.573 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:10.573 filename1: (groupid=0, jobs=1): err= 0: pid=81908: Mon Nov 4 14:51:17 2024 00:25:10.573 read: IOPS=252, 
BW=1012KiB/s (1036kB/s)(9.91MiB/10025msec) 00:25:10.573 slat (usec): min=4, max=8015, avg=20.74, stdev=240.02 00:25:10.573 clat (msec): min=19, max=120, avg=63.10, stdev=18.63 00:25:10.573 lat (msec): min=19, max=120, avg=63.13, stdev=18.62 00:25:10.573 clat percentiles (msec): 00:25:10.573 | 1.00th=[ 30], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 47], 00:25:10.573 | 30.00th=[ 52], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 68], 00:25:10.573 | 70.00th=[ 77], 80.00th=[ 82], 90.00th=[ 90], 95.00th=[ 93], 00:25:10.573 | 99.00th=[ 105], 99.50th=[ 114], 99.90th=[ 117], 99.95th=[ 122], 00:25:10.573 | 99.99th=[ 122] 00:25:10.573 bw ( KiB/s): min= 776, max= 1280, per=4.17%, avg=1010.50, stdev=185.70, samples=20 00:25:10.573 iops : min= 194, max= 320, avg=252.60, stdev=46.40, samples=20 00:25:10.573 lat (msec) : 20=0.55%, 50=26.26%, 100=71.53%, 250=1.66% 00:25:10.573 cpu : usr=37.84%, sys=1.08%, ctx=1354, majf=0, minf=9 00:25:10.573 IO depths : 1=0.1%, 2=0.9%, 4=3.9%, 8=79.4%, 16=15.8%, 32=0.0%, >=64=0.0% 00:25:10.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.573 complete : 0=0.0%, 4=88.3%, 8=10.9%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.573 issued rwts: total=2536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.573 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:10.573 filename1: (groupid=0, jobs=1): err= 0: pid=81909: Mon Nov 4 14:51:17 2024 00:25:10.573 read: IOPS=248, BW=996KiB/s (1020kB/s)(9980KiB/10024msec) 00:25:10.573 slat (usec): min=3, max=12023, avg=26.52, stdev=335.52 00:25:10.573 clat (msec): min=23, max=120, avg=64.11, stdev=18.36 00:25:10.573 lat (msec): min=23, max=120, avg=64.14, stdev=18.36 00:25:10.573 clat percentiles (msec): 00:25:10.573 | 1.00th=[ 30], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 48], 00:25:10.573 | 30.00th=[ 54], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 71], 00:25:10.573 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 88], 95.00th=[ 94], 00:25:10.573 | 99.00th=[ 100], 99.50th=[ 108], 99.90th=[ 118], 99.95th=[ 121], 00:25:10.573 | 99.99th=[ 121] 00:25:10.573 bw ( KiB/s): min= 784, max= 1304, per=4.09%, avg=991.60, stdev=202.79, samples=20 00:25:10.573 iops : min= 196, max= 326, avg=247.90, stdev=50.70, samples=20 00:25:10.573 lat (msec) : 50=25.13%, 100=73.99%, 250=0.88% 00:25:10.573 cpu : usr=40.91%, sys=1.35%, ctx=1072, majf=0, minf=9 00:25:10.573 IO depths : 1=0.1%, 2=1.2%, 4=5.1%, 8=78.1%, 16=15.6%, 32=0.0%, >=64=0.0% 00:25:10.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.573 complete : 0=0.0%, 4=88.6%, 8=10.3%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.573 issued rwts: total=2495,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.573 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:10.573 filename1: (groupid=0, jobs=1): err= 0: pid=81910: Mon Nov 4 14:51:17 2024 00:25:10.573 read: IOPS=250, BW=1003KiB/s (1027kB/s)(9.83MiB/10036msec) 00:25:10.573 slat (usec): min=2, max=8025, avg=16.38, stdev=178.78 00:25:10.573 clat (msec): min=10, max=120, avg=63.72, stdev=17.13 00:25:10.573 lat (msec): min=10, max=120, avg=63.74, stdev=17.12 00:25:10.573 clat percentiles (msec): 00:25:10.573 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 48], 00:25:10.573 | 30.00th=[ 53], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 70], 00:25:10.573 | 70.00th=[ 73], 80.00th=[ 83], 90.00th=[ 86], 95.00th=[ 91], 00:25:10.573 | 99.00th=[ 96], 99.50th=[ 97], 99.90th=[ 120], 99.95th=[ 121], 00:25:10.573 | 99.99th=[ 121] 00:25:10.573 bw ( KiB/s): min= 824, max= 1392, per=4.13%, avg=1000.00, 
stdev=164.67, samples=20 00:25:10.573 iops : min= 206, max= 348, avg=250.00, stdev=41.17, samples=20 00:25:10.573 lat (msec) : 20=0.64%, 50=24.56%, 100=74.60%, 250=0.20% 00:25:10.573 cpu : usr=36.72%, sys=0.91%, ctx=1043, majf=0, minf=9 00:25:10.573 IO depths : 1=0.1%, 2=1.0%, 4=4.1%, 8=78.6%, 16=16.2%, 32=0.0%, >=64=0.0% 00:25:10.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.573 complete : 0=0.0%, 4=88.8%, 8=10.4%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.573 issued rwts: total=2516,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.573 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:10.573 filename1: (groupid=0, jobs=1): err= 0: pid=81911: Mon Nov 4 14:51:17 2024 00:25:10.573 read: IOPS=254, BW=1020KiB/s (1044kB/s)(9.97MiB/10012msec) 00:25:10.573 slat (usec): min=2, max=8031, avg=17.65, stdev=194.49 00:25:10.573 clat (msec): min=24, max=140, avg=62.68, stdev=17.84 00:25:10.573 lat (msec): min=24, max=140, avg=62.70, stdev=17.84 00:25:10.573 clat percentiles (msec): 00:25:10.573 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 47], 00:25:10.573 | 30.00th=[ 53], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 67], 00:25:10.573 | 70.00th=[ 74], 80.00th=[ 82], 90.00th=[ 87], 95.00th=[ 92], 00:25:10.573 | 99.00th=[ 105], 99.50th=[ 110], 99.90th=[ 110], 99.95th=[ 142], 00:25:10.573 | 99.99th=[ 142] 00:25:10.573 bw ( KiB/s): min= 848, max= 1304, per=4.20%, avg=1016.80, stdev=166.44, samples=20 00:25:10.573 iops : min= 212, max= 326, avg=254.20, stdev=41.61, samples=20 00:25:10.573 lat (msec) : 50=26.65%, 100=71.83%, 250=1.53% 00:25:10.573 cpu : usr=39.03%, sys=1.19%, ctx=1376, majf=0, minf=9 00:25:10.573 IO depths : 1=0.1%, 2=0.7%, 4=2.9%, 8=80.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:25:10.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.573 complete : 0=0.0%, 4=87.9%, 8=11.5%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.573 issued rwts: total=2552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.573 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:10.573 filename1: (groupid=0, jobs=1): err= 0: pid=81913: Mon Nov 4 14:51:17 2024 00:25:10.573 read: IOPS=257, BW=1028KiB/s (1053kB/s)(10.0MiB/10003msec) 00:25:10.573 slat (usec): min=3, max=7014, avg=18.42, stdev=188.51 00:25:10.573 clat (msec): min=2, max=111, avg=62.17, stdev=18.61 00:25:10.573 lat (msec): min=3, max=111, avg=62.19, stdev=18.61 00:25:10.573 clat percentiles (msec): 00:25:10.573 | 1.00th=[ 23], 5.00th=[ 34], 10.00th=[ 39], 20.00th=[ 48], 00:25:10.573 | 30.00th=[ 52], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 68], 00:25:10.573 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 89], 95.00th=[ 94], 00:25:10.573 | 99.00th=[ 99], 99.50th=[ 102], 99.90th=[ 109], 99.95th=[ 112], 00:25:10.573 | 99.99th=[ 112] 00:25:10.573 bw ( KiB/s): min= 768, max= 1328, per=4.15%, avg=1005.95, stdev=187.49, samples=19 00:25:10.574 iops : min= 192, max= 332, avg=251.47, stdev=46.88, samples=19 00:25:10.574 lat (msec) : 4=0.23%, 10=0.16%, 20=0.47%, 50=26.76%, 100=71.88% 00:25:10.574 lat (msec) : 250=0.51% 00:25:10.574 cpu : usr=36.96%, sys=1.15%, ctx=1176, majf=0, minf=9 00:25:10.574 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=79.8%, 16=15.5%, 32=0.0%, >=64=0.0% 00:25:10.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.574 complete : 0=0.0%, 4=88.0%, 8=11.2%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.574 issued rwts: total=2571,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.574 latency : target=0, window=0, percentile=100.00%, depth=16 
00:25:10.574 filename1: (groupid=0, jobs=1): err= 0: pid=81914: Mon Nov 4 14:51:17 2024 00:25:10.574 read: IOPS=254, BW=1018KiB/s (1043kB/s)(9.96MiB/10011msec) 00:25:10.574 slat (usec): min=3, max=8023, avg=22.97, stdev=250.78 00:25:10.574 clat (msec): min=13, max=112, avg=62.73, stdev=18.26 00:25:10.574 lat (msec): min=13, max=112, avg=62.76, stdev=18.26 00:25:10.574 clat percentiles (msec): 00:25:10.574 | 1.00th=[ 26], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 48], 00:25:10.574 | 30.00th=[ 51], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 68], 00:25:10.574 | 70.00th=[ 74], 80.00th=[ 83], 90.00th=[ 87], 95.00th=[ 94], 00:25:10.574 | 99.00th=[ 100], 99.50th=[ 100], 99.90th=[ 110], 99.95th=[ 112], 00:25:10.574 | 99.99th=[ 112] 00:25:10.574 bw ( KiB/s): min= 768, max= 1280, per=4.12%, avg=997.89, stdev=159.53, samples=19 00:25:10.574 iops : min= 192, max= 320, avg=249.47, stdev=39.88, samples=19 00:25:10.574 lat (msec) : 20=0.51%, 50=28.17%, 100=70.85%, 250=0.47% 00:25:10.574 cpu : usr=37.10%, sys=1.22%, ctx=1341, majf=0, minf=9 00:25:10.574 IO depths : 1=0.1%, 2=0.7%, 4=2.7%, 8=80.7%, 16=15.8%, 32=0.0%, >=64=0.0% 00:25:10.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.574 complete : 0=0.0%, 4=87.8%, 8=11.5%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.574 issued rwts: total=2549,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.574 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:10.574 filename1: (groupid=0, jobs=1): err= 0: pid=81915: Mon Nov 4 14:51:17 2024 00:25:10.574 read: IOPS=255, BW=1022KiB/s (1047kB/s)(9.99MiB/10007msec) 00:25:10.574 slat (usec): min=3, max=9020, avg=17.86, stdev=238.38 00:25:10.574 clat (msec): min=12, max=108, avg=62.50, stdev=17.86 00:25:10.574 lat (msec): min=12, max=108, avg=62.52, stdev=17.86 00:25:10.574 clat percentiles (msec): 00:25:10.574 | 1.00th=[ 26], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 48], 00:25:10.574 | 30.00th=[ 50], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 69], 00:25:10.574 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 85], 95.00th=[ 94], 00:25:10.574 | 99.00th=[ 96], 99.50th=[ 97], 99.90th=[ 108], 99.95th=[ 109], 00:25:10.574 | 99.99th=[ 109] 00:25:10.574 bw ( KiB/s): min= 832, max= 1280, per=4.15%, avg=1006.37, stdev=173.00, samples=19 00:25:10.574 iops : min= 208, max= 320, avg=251.58, stdev=43.26, samples=19 00:25:10.574 lat (msec) : 20=0.51%, 50=30.73%, 100=68.37%, 250=0.39% 00:25:10.574 cpu : usr=32.36%, sys=0.90%, ctx=912, majf=0, minf=9 00:25:10.574 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=79.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:25:10.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.574 complete : 0=0.0%, 4=88.1%, 8=11.1%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.574 issued rwts: total=2558,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.574 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:10.574 filename2: (groupid=0, jobs=1): err= 0: pid=81916: Mon Nov 4 14:51:17 2024 00:25:10.574 read: IOPS=249, BW=997KiB/s (1021kB/s)(9.77MiB/10031msec) 00:25:10.574 slat (usec): min=4, max=245, avg=11.28, stdev= 8.73 00:25:10.574 clat (msec): min=14, max=113, avg=64.09, stdev=16.90 00:25:10.574 lat (msec): min=14, max=113, avg=64.10, stdev=16.90 00:25:10.574 clat percentiles (msec): 00:25:10.574 | 1.00th=[ 18], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 50], 00:25:10.574 | 30.00th=[ 55], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 68], 00:25:10.574 | 70.00th=[ 75], 80.00th=[ 82], 90.00th=[ 87], 95.00th=[ 92], 00:25:10.574 | 99.00th=[ 97], 99.50th=[ 100], 99.90th=[ 108], 
99.95th=[ 112], 00:25:10.574 | 99.99th=[ 114] 00:25:10.574 bw ( KiB/s): min= 840, max= 1200, per=4.10%, avg=993.60, stdev=128.33, samples=20 00:25:10.574 iops : min= 210, max= 300, avg=248.40, stdev=32.08, samples=20 00:25:10.574 lat (msec) : 20=1.20%, 50=20.36%, 100=78.08%, 250=0.36% 00:25:10.574 cpu : usr=41.22%, sys=1.30%, ctx=1223, majf=0, minf=9 00:25:10.574 IO depths : 1=0.2%, 2=0.6%, 4=1.9%, 8=80.8%, 16=16.4%, 32=0.0%, >=64=0.0% 00:25:10.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.574 complete : 0=0.0%, 4=88.1%, 8=11.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.574 issued rwts: total=2500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.574 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:10.574 filename2: (groupid=0, jobs=1): err= 0: pid=81917: Mon Nov 4 14:51:17 2024 00:25:10.574 read: IOPS=247, BW=991KiB/s (1015kB/s)(9920KiB/10011msec) 00:25:10.574 slat (usec): min=3, max=4034, avg=19.44, stdev=172.92 00:25:10.574 clat (msec): min=13, max=127, avg=64.50, stdev=19.99 00:25:10.574 lat (msec): min=13, max=127, avg=64.52, stdev=19.99 00:25:10.574 clat percentiles (msec): 00:25:10.574 | 1.00th=[ 28], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 48], 00:25:10.574 | 30.00th=[ 53], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 72], 00:25:10.574 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 96], 00:25:10.574 | 99.00th=[ 108], 99.50th=[ 108], 99.90th=[ 121], 99.95th=[ 128], 00:25:10.574 | 99.99th=[ 128] 00:25:10.574 bw ( KiB/s): min= 656, max= 1272, per=4.00%, avg=968.42, stdev=209.51, samples=19 00:25:10.574 iops : min= 164, max= 318, avg=242.11, stdev=52.38, samples=19 00:25:10.574 lat (msec) : 20=0.52%, 50=24.88%, 100=72.10%, 250=2.50% 00:25:10.574 cpu : usr=42.75%, sys=1.25%, ctx=1164, majf=0, minf=9 00:25:10.574 IO depths : 1=0.1%, 2=1.2%, 4=5.3%, 8=77.9%, 16=15.5%, 32=0.0%, >=64=0.0% 00:25:10.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.574 complete : 0=0.0%, 4=88.7%, 8=10.2%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.574 issued rwts: total=2480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.574 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:10.574 filename2: (groupid=0, jobs=1): err= 0: pid=81918: Mon Nov 4 14:51:17 2024 00:25:10.574 read: IOPS=237, BW=952KiB/s (975kB/s)(9548KiB/10031msec) 00:25:10.574 slat (usec): min=4, max=8017, avg=15.65, stdev=183.44 00:25:10.574 clat (msec): min=10, max=120, avg=67.11, stdev=20.58 00:25:10.574 lat (msec): min=10, max=120, avg=67.13, stdev=20.59 00:25:10.574 clat percentiles (msec): 00:25:10.574 | 1.00th=[ 17], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 50], 00:25:10.574 | 30.00th=[ 54], 40.00th=[ 60], 50.00th=[ 63], 60.00th=[ 73], 00:25:10.574 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 95], 95.00th=[ 100], 00:25:10.574 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 121], 00:25:10.574 | 99.99th=[ 121] 00:25:10.574 bw ( KiB/s): min= 544, max= 1277, per=3.91%, avg=948.25, stdev=212.18, samples=20 00:25:10.574 iops : min= 136, max= 319, avg=237.05, stdev=53.03, samples=20 00:25:10.574 lat (msec) : 20=1.26%, 50=23.00%, 100=71.30%, 250=4.44% 00:25:10.574 cpu : usr=37.12%, sys=0.96%, ctx=1072, majf=0, minf=9 00:25:10.574 IO depths : 1=0.1%, 2=1.6%, 4=6.6%, 8=75.7%, 16=16.0%, 32=0.0%, >=64=0.0% 00:25:10.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.574 complete : 0=0.0%, 4=89.5%, 8=9.0%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.574 issued rwts: total=2387,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:25:10.574 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:10.574 filename2: (groupid=0, jobs=1): err= 0: pid=81919: Mon Nov 4 14:51:17 2024 00:25:10.574 read: IOPS=261, BW=1044KiB/s (1069kB/s)(10.2MiB/10047msec) 00:25:10.574 slat (usec): min=3, max=8024, avg=20.90, stdev=292.49 00:25:10.574 clat (msec): min=2, max=119, avg=61.18, stdev=20.82 00:25:10.574 lat (msec): min=2, max=119, avg=61.20, stdev=20.82 00:25:10.574 clat percentiles (msec): 00:25:10.574 | 1.00th=[ 3], 5.00th=[ 31], 10.00th=[ 38], 20.00th=[ 47], 00:25:10.574 | 30.00th=[ 52], 40.00th=[ 56], 50.00th=[ 60], 60.00th=[ 67], 00:25:10.574 | 70.00th=[ 73], 80.00th=[ 83], 90.00th=[ 86], 95.00th=[ 93], 00:25:10.574 | 99.00th=[ 103], 99.50th=[ 108], 99.90th=[ 108], 99.95th=[ 109], 00:25:10.574 | 99.99th=[ 121] 00:25:10.574 bw ( KiB/s): min= 784, max= 2016, per=4.30%, avg=1042.80, stdev=287.25, samples=20 00:25:10.574 iops : min= 196, max= 504, avg=260.70, stdev=71.81, samples=20 00:25:10.574 lat (msec) : 4=1.75%, 10=1.75%, 20=0.69%, 50=23.18%, 100=71.10% 00:25:10.574 lat (msec) : 250=1.52% 00:25:10.574 cpu : usr=39.77%, sys=1.33%, ctx=1193, majf=0, minf=9 00:25:10.574 IO depths : 1=0.1%, 2=1.1%, 4=4.1%, 8=78.8%, 16=15.9%, 32=0.0%, >=64=0.0% 00:25:10.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.574 complete : 0=0.0%, 4=88.5%, 8=10.6%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.574 issued rwts: total=2623,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.574 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:10.574 filename2: (groupid=0, jobs=1): err= 0: pid=81920: Mon Nov 4 14:51:17 2024 00:25:10.574 read: IOPS=254, BW=1017KiB/s (1041kB/s)(9.94MiB/10009msec) 00:25:10.574 slat (usec): min=5, max=1026, avg=12.54, stdev=21.60 00:25:10.574 clat (msec): min=12, max=114, avg=62.86, stdev=17.03 00:25:10.574 lat (msec): min=12, max=114, avg=62.87, stdev=17.03 00:25:10.574 clat percentiles (msec): 00:25:10.574 | 1.00th=[ 28], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 48], 00:25:10.574 | 30.00th=[ 52], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 64], 00:25:10.574 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 86], 95.00th=[ 93], 00:25:10.574 | 99.00th=[ 97], 99.50th=[ 100], 99.90th=[ 108], 99.95th=[ 114], 00:25:10.574 | 99.99th=[ 114] 00:25:10.574 bw ( KiB/s): min= 816, max= 1280, per=4.12%, avg=997.89, stdev=145.37, samples=19 00:25:10.574 iops : min= 204, max= 320, avg=249.47, stdev=36.34, samples=19 00:25:10.574 lat (msec) : 20=0.47%, 50=27.07%, 100=72.14%, 250=0.31% 00:25:10.574 cpu : usr=38.12%, sys=1.03%, ctx=1099, majf=0, minf=9 00:25:10.574 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.3%, 16=16.2%, 32=0.0%, >=64=0.0% 00:25:10.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.574 complete : 0=0.0%, 4=87.9%, 8=11.7%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.575 issued rwts: total=2545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.575 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:10.575 filename2: (groupid=0, jobs=1): err= 0: pid=81921: Mon Nov 4 14:51:17 2024 00:25:10.575 read: IOPS=247, BW=988KiB/s (1012kB/s)(9916KiB/10035msec) 00:25:10.575 slat (usec): min=4, max=8013, avg=13.03, stdev=160.88 00:25:10.575 clat (msec): min=13, max=110, avg=64.68, stdev=17.09 00:25:10.575 lat (msec): min=13, max=110, avg=64.69, stdev=17.10 00:25:10.575 clat percentiles (msec): 00:25:10.575 | 1.00th=[ 18], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 48], 00:25:10.575 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 61], 60.00th=[ 72], 
00:25:10.575 | 70.00th=[ 73], 80.00th=[ 84], 90.00th=[ 85], 95.00th=[ 94], 00:25:10.575 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 108], 99.95th=[ 110], 00:25:10.575 | 99.99th=[ 111] 00:25:10.575 bw ( KiB/s): min= 816, max= 1272, per=4.07%, avg=985.20, stdev=154.09, samples=20 00:25:10.575 iops : min= 204, max= 318, avg=246.30, stdev=38.52, samples=20 00:25:10.575 lat (msec) : 20=1.13%, 50=23.32%, 100=75.23%, 250=0.32% 00:25:10.575 cpu : usr=32.18%, sys=1.06%, ctx=922, majf=0, minf=9 00:25:10.575 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=81.0%, 16=16.9%, 32=0.0%, >=64=0.0% 00:25:10.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.575 complete : 0=0.0%, 4=88.3%, 8=11.3%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.575 issued rwts: total=2479,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.575 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:10.575 filename2: (groupid=0, jobs=1): err= 0: pid=81922: Mon Nov 4 14:51:17 2024 00:25:10.575 read: IOPS=249, BW=997KiB/s (1021kB/s)(9.77MiB/10030msec) 00:25:10.575 slat (usec): min=4, max=9027, avg=18.41, stdev=254.17 00:25:10.575 clat (msec): min=24, max=114, avg=64.11, stdev=18.74 00:25:10.575 lat (msec): min=24, max=114, avg=64.12, stdev=18.74 00:25:10.575 clat percentiles (msec): 00:25:10.575 | 1.00th=[ 31], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 48], 00:25:10.575 | 30.00th=[ 53], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 71], 00:25:10.575 | 70.00th=[ 77], 80.00th=[ 84], 90.00th=[ 91], 95.00th=[ 96], 00:25:10.575 | 99.00th=[ 101], 99.50th=[ 108], 99.90th=[ 111], 99.95th=[ 111], 00:25:10.575 | 99.99th=[ 115] 00:25:10.575 bw ( KiB/s): min= 760, max= 1272, per=4.10%, avg=993.70, stdev=199.34, samples=20 00:25:10.575 iops : min= 190, max= 318, avg=248.40, stdev=49.80, samples=20 00:25:10.575 lat (msec) : 50=25.12%, 100=74.00%, 250=0.88% 00:25:10.575 cpu : usr=38.76%, sys=1.26%, ctx=1188, majf=0, minf=9 00:25:10.575 IO depths : 1=0.1%, 2=1.0%, 4=4.0%, 8=78.9%, 16=16.0%, 32=0.0%, >=64=0.0% 00:25:10.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.575 complete : 0=0.0%, 4=88.5%, 8=10.6%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.575 issued rwts: total=2500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.575 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:10.575 filename2: (groupid=0, jobs=1): err= 0: pid=81923: Mon Nov 4 14:51:17 2024 00:25:10.575 read: IOPS=252, BW=1012KiB/s (1036kB/s)(9.90MiB/10018msec) 00:25:10.575 slat (usec): min=3, max=4048, avg=16.44, stdev=130.13 00:25:10.575 clat (msec): min=25, max=112, avg=63.15, stdev=16.73 00:25:10.575 lat (msec): min=25, max=112, avg=63.17, stdev=16.73 00:25:10.575 clat percentiles (msec): 00:25:10.575 | 1.00th=[ 32], 5.00th=[ 37], 10.00th=[ 42], 20.00th=[ 48], 00:25:10.575 | 30.00th=[ 54], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 66], 00:25:10.575 | 70.00th=[ 73], 80.00th=[ 82], 90.00th=[ 86], 95.00th=[ 91], 00:25:10.575 | 99.00th=[ 96], 99.50th=[ 102], 99.90th=[ 108], 99.95th=[ 108], 00:25:10.575 | 99.99th=[ 112] 00:25:10.575 bw ( KiB/s): min= 816, max= 1248, per=4.16%, avg=1007.20, stdev=155.69, samples=20 00:25:10.575 iops : min= 204, max= 312, avg=251.80, stdev=38.92, samples=20 00:25:10.575 lat (msec) : 50=25.10%, 100=74.31%, 250=0.59% 00:25:10.575 cpu : usr=40.91%, sys=1.29%, ctx=1149, majf=0, minf=9 00:25:10.575 IO depths : 1=0.1%, 2=0.6%, 4=2.0%, 8=81.1%, 16=16.3%, 32=0.0%, >=64=0.0% 00:25:10.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.575 complete : 0=0.0%, 
4=88.0%, 8=11.6%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.575 issued rwts: total=2534,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.575 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:10.575 00:25:10.575 Run status group 0 (all jobs): 00:25:10.575 READ: bw=23.6MiB/s (24.8MB/s), 952KiB/s-1088KiB/s (975kB/s-1114kB/s), io=238MiB (249MB), run=10001-10049msec 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.575 14:51:18 
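A quick consistency check on the summary above (a hypothetical one-liner, not part of the harness): at this run's 4096-byte block size, per-thread bandwidth and IOPS should agree, and the aggregate 23.6 MiB/s spread over 24 threads should fall inside the reported 952-1088 KiB/s per-thread range.

# Hypothetical sanity check of the numbers reported above
awk 'BEGIN {
  printf "IOPS implied by 1011 KiB/s at 4 KiB blocks: %.1f (reported: 252)\n", 1011/4;
  printf "Aggregate 23.6 MiB/s over 24 threads: %.0f KiB/s per thread\n", 23.6*1024/24;
}'

This prints roughly 252.8 IOPS and 1007 KiB/s per thread, both consistent with the per-job and group figures in the log.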
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:10.575 bdev_null0 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:10.575 [2024-11-04 14:51:18.166258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
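The second pass set up above reuses the same RPC sequence, but with only two subsystems (create_subsystems 0 1) and DIF type 1 null bdevs; the only change to the bdev creation call, as traced, is the DIF type:

# Second pass: same sequence as before, but DIF type 1 and two subsystems only
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1   # was --dif-type 2 in the first pass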
create_subsystem 1 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:25:10.575 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:10.576 bdev_null1 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:10.576 { 00:25:10.576 "params": { 00:25:10.576 "name": "Nvme$subsystem", 00:25:10.576 "trtype": "$TEST_TRANSPORT", 00:25:10.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:10.576 "adrfam": "ipv4", 00:25:10.576 "trsvcid": "$NVMF_PORT", 00:25:10.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:10.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:10.576 "hdgst": ${hdgst:-false}, 00:25:10.576 "ddgst": ${ddgst:-false} 00:25:10.576 }, 00:25:10.576 "method": "bdev_nvme_attach_controller" 00:25:10.576 } 00:25:10.576 EOF 00:25:10.576 )") 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:10.576 { 00:25:10.576 "params": { 00:25:10.576 "name": "Nvme$subsystem", 00:25:10.576 "trtype": "$TEST_TRANSPORT", 00:25:10.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:10.576 "adrfam": "ipv4", 00:25:10.576 "trsvcid": "$NVMF_PORT", 00:25:10.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:10.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:10.576 "hdgst": ${hdgst:-false}, 00:25:10.576 "ddgst": ${ddgst:-false} 00:25:10.576 }, 00:25:10.576 "method": "bdev_nvme_attach_controller" 00:25:10.576 } 00:25:10.576 EOF 00:25:10.576 )") 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:10.576 "params": { 00:25:10.576 "name": "Nvme0", 00:25:10.576 "trtype": "tcp", 00:25:10.576 "traddr": "10.0.0.3", 00:25:10.576 "adrfam": "ipv4", 00:25:10.576 "trsvcid": "4420", 00:25:10.576 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:10.576 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:10.576 "hdgst": false, 00:25:10.576 "ddgst": false 00:25:10.576 }, 00:25:10.576 "method": "bdev_nvme_attach_controller" 00:25:10.576 },{ 00:25:10.576 "params": { 00:25:10.576 "name": "Nvme1", 00:25:10.576 "trtype": "tcp", 00:25:10.576 "traddr": "10.0.0.3", 00:25:10.576 "adrfam": "ipv4", 00:25:10.576 "trsvcid": "4420", 00:25:10.576 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:10.576 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:10.576 "hdgst": false, 00:25:10.576 "ddgst": false 00:25:10.576 }, 00:25:10.576 "method": "bdev_nvme_attach_controller" 00:25:10.576 }' 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:10.576 14:51:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:10.576 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:10.576 ... 00:25:10.576 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:10.576 ... 
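The job layout behind the "Starting 4 threads" line that follows is generated on the fly by gen_fio_conf and passed as /dev/fd/61, so it never appears in the log. A hypothetical equivalent, assuming the two attached controllers expose bdevs named Nvme0n1 and Nvme1n1 (the usual SPDK naming for controllers attached as Nvme0/Nvme1), would look roughly like this; only the parameters visible in the trace (rw, bs, iodepth, numjobs, runtime, ioengine) are taken from the log:

# Hypothetical fio job file matching the second run's parameters
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1

With two filename sections and numjobs=2, fio spawns the four threads reported below; the first run presumably used the same shape with three filenames, 4 KiB blocks, and iodepth=16 to reach its 24 threads.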
00:25:10.576 fio-3.35 00:25:10.576 Starting 4 threads 00:25:15.834 00:25:15.834 filename0: (groupid=0, jobs=1): err= 0: pid=82080: Mon Nov 4 14:51:23 2024 00:25:15.834 read: IOPS=2760, BW=21.6MiB/s (22.6MB/s)(108MiB/5001msec) 00:25:15.834 slat (nsec): min=5572, max=40931, avg=7882.77, stdev=4037.87 00:25:15.834 clat (usec): min=754, max=5843, avg=2875.25, stdev=745.80 00:25:15.834 lat (usec): min=761, max=5850, avg=2883.13, stdev=745.99 00:25:15.834 clat percentiles (usec): 00:25:15.834 | 1.00th=[ 1254], 5.00th=[ 1614], 10.00th=[ 1696], 20.00th=[ 1991], 00:25:15.834 | 30.00th=[ 2343], 40.00th=[ 2868], 50.00th=[ 3163], 60.00th=[ 3326], 00:25:15.834 | 70.00th=[ 3392], 80.00th=[ 3490], 90.00th=[ 3589], 95.00th=[ 3752], 00:25:15.834 | 99.00th=[ 4228], 99.50th=[ 4293], 99.90th=[ 4621], 99.95th=[ 5014], 00:25:15.834 | 99.99th=[ 5800] 00:25:15.834 bw ( KiB/s): min=18800, max=25152, per=25.28%, avg=21939.56, stdev=1954.31, samples=9 00:25:15.834 iops : min= 2350, max= 3144, avg=2742.44, stdev=244.29, samples=9 00:25:15.834 lat (usec) : 1000=0.36% 00:25:15.834 lat (msec) : 2=20.17%, 4=76.97%, 10=2.50% 00:25:15.834 cpu : usr=93.42%, sys=6.00%, ctx=7, majf=0, minf=9 00:25:15.834 IO depths : 1=0.1%, 2=8.2%, 4=60.5%, 8=31.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:15.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:15.834 complete : 0=0.0%, 4=96.9%, 8=3.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:15.834 issued rwts: total=13806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:15.834 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:15.834 filename0: (groupid=0, jobs=1): err= 0: pid=82081: Mon Nov 4 14:51:23 2024 00:25:15.834 read: IOPS=2805, BW=21.9MiB/s (23.0MB/s)(110MiB/5001msec) 00:25:15.834 slat (nsec): min=5566, max=98544, avg=9038.39, stdev=5141.07 00:25:15.834 clat (usec): min=284, max=5871, avg=2826.31, stdev=786.47 00:25:15.834 lat (usec): min=292, max=5877, avg=2835.35, stdev=786.45 00:25:15.834 clat percentiles (usec): 00:25:15.834 | 1.00th=[ 1237], 5.00th=[ 1565], 10.00th=[ 1647], 20.00th=[ 1958], 00:25:15.834 | 30.00th=[ 2245], 40.00th=[ 2737], 50.00th=[ 3097], 60.00th=[ 3261], 00:25:15.834 | 70.00th=[ 3392], 80.00th=[ 3490], 90.00th=[ 3621], 95.00th=[ 3851], 00:25:15.834 | 99.00th=[ 4293], 99.50th=[ 4359], 99.90th=[ 4621], 99.95th=[ 4686], 00:25:15.834 | 99.99th=[ 5735] 00:25:15.834 bw ( KiB/s): min=19520, max=25760, per=25.86%, avg=22445.33, stdev=2229.38, samples=9 00:25:15.834 iops : min= 2440, max= 3220, avg=2805.67, stdev=278.67, samples=9 00:25:15.834 lat (usec) : 500=0.02%, 750=0.09%, 1000=0.41% 00:25:15.834 lat (msec) : 2=21.94%, 4=73.97%, 10=3.56% 00:25:15.834 cpu : usr=93.40%, sys=5.74%, ctx=72, majf=0, minf=9 00:25:15.834 IO depths : 1=0.1%, 2=6.7%, 4=61.1%, 8=32.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:15.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:15.834 complete : 0=0.0%, 4=97.5%, 8=2.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:15.834 issued rwts: total=14028,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:15.834 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:15.834 filename1: (groupid=0, jobs=1): err= 0: pid=82082: Mon Nov 4 14:51:23 2024 00:25:15.834 read: IOPS=2610, BW=20.4MiB/s (21.4MB/s)(102MiB/5002msec) 00:25:15.834 slat (nsec): min=4344, max=48951, avg=11189.40, stdev=6344.83 00:25:15.834 clat (usec): min=739, max=5544, avg=3028.15, stdev=708.60 00:25:15.834 lat (usec): min=747, max=5554, avg=3039.34, stdev=708.57 00:25:15.834 clat percentiles (usec): 00:25:15.834 | 1.00th=[ 1270], 
5.00th=[ 1680], 10.00th=[ 1860], 20.00th=[ 2245], 00:25:15.834 | 30.00th=[ 2835], 40.00th=[ 3130], 50.00th=[ 3294], 60.00th=[ 3359], 00:25:15.834 | 70.00th=[ 3458], 80.00th=[ 3523], 90.00th=[ 3621], 95.00th=[ 3916], 00:25:15.834 | 99.00th=[ 4293], 99.50th=[ 4621], 99.90th=[ 5014], 99.95th=[ 5080], 00:25:15.834 | 99.99th=[ 5538] 00:25:15.834 bw ( KiB/s): min=17920, max=22528, per=23.81%, avg=20661.33, stdev=1756.43, samples=9 00:25:15.834 iops : min= 2240, max= 2816, avg=2582.67, stdev=219.55, samples=9 00:25:15.835 lat (usec) : 750=0.02%, 1000=0.32% 00:25:15.835 lat (msec) : 2=13.62%, 4=81.95%, 10=4.09% 00:25:15.835 cpu : usr=94.08%, sys=5.16%, ctx=108, majf=0, minf=9 00:25:15.835 IO depths : 1=0.1%, 2=12.0%, 4=58.3%, 8=29.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:15.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:15.835 complete : 0=0.0%, 4=95.4%, 8=4.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:15.835 issued rwts: total=13060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:15.835 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:15.835 filename1: (groupid=0, jobs=1): err= 0: pid=82083: Mon Nov 4 14:51:23 2024 00:25:15.835 read: IOPS=2673, BW=20.9MiB/s (21.9MB/s)(104MiB/5003msec) 00:25:15.835 slat (nsec): min=3780, max=46360, avg=10277.00, stdev=5616.04 00:25:15.835 clat (usec): min=520, max=8138, avg=2962.52, stdev=833.86 00:25:15.835 lat (usec): min=526, max=8155, avg=2972.79, stdev=834.19 00:25:15.835 clat percentiles (usec): 00:25:15.835 | 1.00th=[ 1123], 5.00th=[ 1467], 10.00th=[ 1631], 20.00th=[ 2073], 00:25:15.835 | 30.00th=[ 2638], 40.00th=[ 3064], 50.00th=[ 3261], 60.00th=[ 3392], 00:25:15.835 | 70.00th=[ 3490], 80.00th=[ 3556], 90.00th=[ 3720], 95.00th=[ 4047], 00:25:15.835 | 99.00th=[ 4621], 99.50th=[ 4752], 99.90th=[ 5276], 99.95th=[ 8094], 00:25:15.835 | 99.99th=[ 8160] 00:25:15.835 bw ( KiB/s): min=18048, max=25776, per=24.93%, avg=21630.11, stdev=2203.20, samples=9 00:25:15.835 iops : min= 2256, max= 3222, avg=2703.67, stdev=275.41, samples=9 00:25:15.835 lat (usec) : 750=0.04%, 1000=0.28% 00:25:15.835 lat (msec) : 2=17.20%, 4=76.82%, 10=5.64% 00:25:15.835 cpu : usr=93.92%, sys=5.40%, ctx=74, majf=0, minf=0 00:25:15.835 IO depths : 1=0.1%, 2=8.9%, 4=59.3%, 8=31.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:15.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:15.835 complete : 0=0.0%, 4=96.6%, 8=3.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:15.835 issued rwts: total=13375,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:15.835 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:15.835 00:25:15.835 Run status group 0 (all jobs): 00:25:15.835 READ: bw=84.7MiB/s (88.9MB/s), 20.4MiB/s-21.9MiB/s (21.4MB/s-23.0MB/s), io=424MiB (445MB), run=5001-5003msec 00:25:15.835 14:51:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:25:15.835 14:51:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:25:15.835 14:51:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:15.835 14:51:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:15.835 14:51:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:25:15.835 14:51:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:15.835 14:51:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.835 14:51:24 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:25:15.835 14:51:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.835 14:51:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:15.835 14:51:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.835 14:51:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:15.835 14:51:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.835 14:51:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:15.835 14:51:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:15.835 14:51:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:25:15.835 14:51:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:15.835 14:51:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.835 14:51:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:15.835 14:51:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.835 14:51:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:15.835 14:51:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.835 14:51:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:15.835 ************************************ 00:25:15.835 END TEST fio_dif_rand_params 00:25:15.835 ************************************ 00:25:15.835 14:51:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.835 00:25:15.835 real 0m22.895s 00:25:15.835 user 2m6.699s 00:25:15.835 sys 0m5.692s 00:25:15.835 14:51:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:15.835 14:51:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:15.835 14:51:24 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:25:15.835 14:51:24 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:15.835 14:51:24 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:15.835 14:51:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:15.835 ************************************ 00:25:15.835 START TEST fio_dif_digest 00:25:15.835 ************************************ 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:15.835 bdev_null0 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:15.835 [2024-11-04 14:51:24.145013] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:25:15.835 14:51:24 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:15.835 { 00:25:15.835 "params": { 00:25:15.835 "name": "Nvme$subsystem", 00:25:15.835 "trtype": "$TEST_TRANSPORT", 00:25:15.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:15.835 "adrfam": "ipv4", 00:25:15.835 "trsvcid": "$NVMF_PORT", 00:25:15.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:15.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:15.835 "hdgst": ${hdgst:-false}, 00:25:15.835 "ddgst": ${ddgst:-false} 00:25:15.835 }, 00:25:15.835 "method": "bdev_nvme_attach_controller" 00:25:15.835 } 00:25:15.835 EOF 00:25:15.835 )") 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:25:15.835 14:51:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:25:15.836 14:51:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:25:15.836 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:15.836 14:51:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:25:15.836 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:25:15.836 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:15.836 14:51:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
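Note: for the digest case the functional difference from the random-params run above is that the generated attach parameters carry "hdgst": true and "ddgst": true, enabling header and data digests on the NVMe/TCP connection; the digests are negotiated per controller, not set in the fio job. The job file itself is written by gen_fio_conf to /dev/fd/61 and is not echoed here; a rough sketch reconstructed from the parameters traced at target/dif.sh@127 (bs=128k, iodepth=3, numjobs=3, runtime=10) and the fio banner below could look like the following, where the bdev name Nvme0n1 is an assumption:

    [global]
    ioengine=spdk_bdev
    runtime=10

    [filename0]
    filename=Nvme0n1
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3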
00:25:15.836 14:51:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:25:15.836 14:51:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:15.836 "params": { 00:25:15.836 "name": "Nvme0", 00:25:15.836 "trtype": "tcp", 00:25:15.836 "traddr": "10.0.0.3", 00:25:15.836 "adrfam": "ipv4", 00:25:15.836 "trsvcid": "4420", 00:25:15.836 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:15.836 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:15.836 "hdgst": true, 00:25:15.836 "ddgst": true 00:25:15.836 }, 00:25:15.836 "method": "bdev_nvme_attach_controller" 00:25:15.836 }' 00:25:15.836 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:15.836 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:15.836 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:15.836 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:25:15.836 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:15.836 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:15.836 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:15.836 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:15.836 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:15.836 14:51:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:15.836 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:15.836 ... 
00:25:15.836 fio-3.35 00:25:15.836 Starting 3 threads 00:25:25.825 00:25:25.826 filename0: (groupid=0, jobs=1): err= 0: pid=82190: Mon Nov 4 14:51:34 2024 00:25:25.826 read: IOPS=306, BW=38.4MiB/s (40.2MB/s)(384MiB/10008msec) 00:25:25.826 slat (nsec): min=5767, max=21851, avg=7474.35, stdev=1529.29 00:25:25.826 clat (usec): min=7470, max=10442, avg=9755.97, stdev=203.54 00:25:25.826 lat (usec): min=7477, max=10452, avg=9763.45, stdev=203.68 00:25:25.826 clat percentiles (usec): 00:25:25.826 | 1.00th=[ 9372], 5.00th=[ 9372], 10.00th=[ 9503], 20.00th=[ 9634], 00:25:25.826 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[ 9765], 60.00th=[ 9765], 00:25:25.826 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10028], 95.00th=[10028], 00:25:25.826 | 99.00th=[10028], 99.50th=[10028], 99.90th=[10421], 99.95th=[10421], 00:25:25.826 | 99.99th=[10421] 00:25:25.826 bw ( KiB/s): min=38323, max=40704, per=33.35%, avg=39281.00, stdev=691.72, samples=19 00:25:25.826 iops : min= 299, max= 318, avg=306.84, stdev= 5.42, samples=19 00:25:25.826 lat (msec) : 10=78.48%, 20=21.52% 00:25:25.826 cpu : usr=92.56%, sys=7.02%, ctx=14, majf=0, minf=0 00:25:25.826 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:25.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.826 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.826 issued rwts: total=3072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.826 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:25.826 filename0: (groupid=0, jobs=1): err= 0: pid=82191: Mon Nov 4 14:51:34 2024 00:25:25.826 read: IOPS=306, BW=38.4MiB/s (40.2MB/s)(384MiB/10001msec) 00:25:25.826 slat (usec): min=5, max=101, avg=10.17, stdev= 5.24 00:25:25.826 clat (usec): min=7705, max=10462, avg=9752.76, stdev=198.08 00:25:25.826 lat (usec): min=7728, max=10495, avg=9762.93, stdev=197.84 00:25:25.826 clat percentiles (usec): 00:25:25.826 | 1.00th=[ 9372], 5.00th=[ 9372], 10.00th=[ 9503], 20.00th=[ 9634], 00:25:25.826 | 30.00th=[ 9634], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[ 9765], 00:25:25.826 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10028], 95.00th=[10028], 00:25:25.826 | 99.00th=[10028], 99.50th=[10159], 99.90th=[10421], 99.95th=[10421], 00:25:25.826 | 99.99th=[10421] 00:25:25.826 bw ( KiB/s): min=37632, max=39936, per=33.32%, avg=39248.84, stdev=719.30, samples=19 00:25:25.826 iops : min= 294, max= 312, avg=306.63, stdev= 5.62, samples=19 00:25:25.826 lat (msec) : 10=79.64%, 20=20.36% 00:25:25.826 cpu : usr=92.15%, sys=7.21%, ctx=118, majf=0, minf=0 00:25:25.826 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:25.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.826 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.826 issued rwts: total=3069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.826 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:25.826 filename0: (groupid=0, jobs=1): err= 0: pid=82192: Mon Nov 4 14:51:34 2024 00:25:25.826 read: IOPS=306, BW=38.4MiB/s (40.2MB/s)(384MiB/10001msec) 00:25:25.826 slat (nsec): min=5778, max=33483, avg=9141.74, stdev=4693.84 00:25:25.826 clat (usec): min=7703, max=10458, avg=9755.96, stdev=199.21 00:25:25.826 lat (usec): min=7727, max=10491, avg=9765.11, stdev=198.61 00:25:25.826 clat percentiles (usec): 00:25:25.826 | 1.00th=[ 9372], 5.00th=[ 9372], 10.00th=[ 9503], 20.00th=[ 9634], 00:25:25.826 | 30.00th=[ 9634], 40.00th=[ 9634], 50.00th=[ 
9765], 60.00th=[ 9765], 00:25:25.826 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10028], 95.00th=[10028], 00:25:25.826 | 99.00th=[10028], 99.50th=[10159], 99.90th=[10421], 99.95th=[10421], 00:25:25.826 | 99.99th=[10421] 00:25:25.826 bw ( KiB/s): min=37632, max=39936, per=33.32%, avg=39248.84, stdev=719.30, samples=19 00:25:25.826 iops : min= 294, max= 312, avg=306.63, stdev= 5.62, samples=19 00:25:25.826 lat (msec) : 10=78.17%, 20=21.83% 00:25:25.826 cpu : usr=92.29%, sys=7.30%, ctx=7, majf=0, minf=0 00:25:25.826 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:25.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.826 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.826 issued rwts: total=3069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.826 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:25.826 00:25:25.826 Run status group 0 (all jobs): 00:25:25.826 READ: bw=115MiB/s (121MB/s), 38.4MiB/s-38.4MiB/s (40.2MB/s-40.2MB/s), io=1151MiB (1207MB), run=10001-10008msec 00:25:25.826 14:51:34 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:25:25.826 14:51:34 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:25:25.826 14:51:34 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:25:25.826 14:51:34 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:25.826 14:51:34 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:25:25.826 14:51:34 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:25.826 14:51:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.826 14:51:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:25.826 14:51:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.826 14:51:34 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:25.826 14:51:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.826 14:51:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:25.826 ************************************ 00:25:25.826 END TEST fio_dif_digest 00:25:25.826 ************************************ 00:25:25.826 14:51:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.826 00:25:25.826 real 0m10.824s 00:25:25.826 user 0m28.211s 00:25:25.826 sys 0m2.338s 00:25:25.826 14:51:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:25.826 14:51:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:26.086 14:51:34 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:25:26.086 14:51:34 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:25:26.086 14:51:34 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:26.086 14:51:34 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:25:26.086 14:51:35 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:26.086 14:51:35 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:25:26.086 14:51:35 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:26.086 14:51:35 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:26.086 rmmod nvme_tcp 00:25:26.086 rmmod nvme_fabrics 00:25:26.086 rmmod nvme_keyring 00:25:26.086 14:51:35 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:26.086 14:51:35 nvmf_dif 
-- nvmf/common.sh@128 -- # set -e 00:25:26.086 14:51:35 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:25:26.086 14:51:35 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 81412 ']' 00:25:26.086 14:51:35 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 81412 00:25:26.086 14:51:35 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 81412 ']' 00:25:26.086 14:51:35 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 81412 00:25:26.086 14:51:35 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:25:26.086 14:51:35 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:26.086 14:51:35 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81412 00:25:26.086 killing process with pid 81412 00:25:26.086 14:51:35 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:26.086 14:51:35 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:26.086 14:51:35 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81412' 00:25:26.086 14:51:35 nvmf_dif -- common/autotest_common.sh@971 -- # kill 81412 00:25:26.086 14:51:35 nvmf_dif -- common/autotest_common.sh@976 -- # wait 81412 00:25:26.086 14:51:35 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:25:26.086 14:51:35 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:26.357 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:26.357 Waiting for block devices as requested 00:25:26.615 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:26.615 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:26.615 14:51:35 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:26.615 14:51:35 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:26.615 14:51:35 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:25:26.615 14:51:35 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:25:26.615 14:51:35 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:26.615 14:51:35 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:25:26.615 14:51:35 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:26.615 14:51:35 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:26.615 14:51:35 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:26.615 14:51:35 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:26.615 14:51:35 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:26.615 14:51:35 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:26.615 14:51:35 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:26.615 14:51:35 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:26.615 14:51:35 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:26.615 14:51:35 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:26.615 14:51:35 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:26.874 14:51:35 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:26.874 14:51:35 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:26.874 14:51:35 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:26.874 14:51:35 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:26.874 14:51:35 nvmf_dif -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:25:26.874 14:51:35 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.874 14:51:35 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:26.874 14:51:35 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.874 14:51:35 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:25:26.874 ************************************ 00:25:26.874 END TEST nvmf_dif 00:25:26.874 ************************************ 00:25:26.874 00:25:26.874 real 0m58.317s 00:25:26.874 user 3m51.130s 00:25:26.874 sys 0m14.490s 00:25:26.874 14:51:35 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:26.874 14:51:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:26.874 14:51:35 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:25:26.874 14:51:35 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:26.874 14:51:35 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:26.874 14:51:35 -- common/autotest_common.sh@10 -- # set +x 00:25:26.874 ************************************ 00:25:26.874 START TEST nvmf_abort_qd_sizes 00:25:26.874 ************************************ 00:25:26.874 14:51:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:25:26.874 * Looking for test storage... 00:25:26.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:26.874 14:51:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:26.874 14:51:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:25:26.874 14:51:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:27.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.133 --rc genhtml_branch_coverage=1 00:25:27.133 --rc genhtml_function_coverage=1 00:25:27.133 --rc genhtml_legend=1 00:25:27.133 --rc geninfo_all_blocks=1 00:25:27.133 --rc geninfo_unexecuted_blocks=1 00:25:27.133 00:25:27.133 ' 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:27.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.133 --rc genhtml_branch_coverage=1 00:25:27.133 --rc genhtml_function_coverage=1 00:25:27.133 --rc genhtml_legend=1 00:25:27.133 --rc geninfo_all_blocks=1 00:25:27.133 --rc geninfo_unexecuted_blocks=1 00:25:27.133 00:25:27.133 ' 00:25:27.133 14:51:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:27.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.133 --rc genhtml_branch_coverage=1 00:25:27.133 --rc genhtml_function_coverage=1 00:25:27.133 --rc genhtml_legend=1 00:25:27.133 --rc geninfo_all_blocks=1 00:25:27.134 --rc geninfo_unexecuted_blocks=1 00:25:27.134 00:25:27.134 ' 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:27.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.134 --rc genhtml_branch_coverage=1 00:25:27.134 --rc genhtml_function_coverage=1 00:25:27.134 --rc genhtml_legend=1 00:25:27.134 --rc geninfo_all_blocks=1 00:25:27.134 --rc geninfo_unexecuted_blocks=1 00:25:27.134 00:25:27.134 ' 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:27.134 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:27.134 Cannot find device "nvmf_init_br" 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:27.134 Cannot find device "nvmf_init_br2" 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:27.134 Cannot find device "nvmf_tgt_br" 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:27.134 Cannot find device "nvmf_tgt_br2" 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:27.134 Cannot find device "nvmf_init_br" 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:27.134 Cannot find device "nvmf_init_br2" 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:27.134 Cannot find device "nvmf_tgt_br" 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:27.134 Cannot find device "nvmf_tgt_br2" 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:27.134 Cannot find device "nvmf_br" 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:27.134 Cannot find device "nvmf_init_if" 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:27.134 Cannot find device "nvmf_init_if2" 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:27.134 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
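Note: the nvmf_veth_init sequence the trace now enters builds a small bridged veth topology; condensed from the commands traced in this block, with all names and addresses as they appear in the log:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator side, 10.0.0.1/24
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator side, 10.0.0.2/24
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target side, moved into the netns, 10.0.0.3/24
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target side, moved into the netns, 10.0.0.4/24
    ip link add nvmf_br type bridge                                 # the *_br peers are enslaved to nvmf_br
    # iptables ACCEPT rules are then added for tcp/4420 on nvmf_init_if and nvmf_init_if2,
    # plus a FORWARD rule on nvmf_br, and connectivity is verified with the pings that follow

The "Cannot find device" and "Cannot open network namespace" messages above are expected here: they come from the cleanup pass (each followed by "# true") that runs before any of these links exist.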
00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:27.134 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:27.134 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:27.135 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:27.135 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:27.135 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:27.135 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:27.135 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:27.135 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:27.135 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:27.135 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:27.135 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:27.135 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:27.135 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:27.135 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:27.135 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:27.135 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:27.135 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:27.135 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:27.135 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:27.135 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:27.135 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:27.135 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:27.393 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:27.393 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:27.393 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:27.393 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:27.393 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:27.393 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:27.393 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:27.393 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:27.393 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:27.393 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:25:27.393 00:25:27.393 --- 10.0.0.3 ping statistics --- 00:25:27.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.393 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:25:27.393 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:27.393 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:27.393 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:25:27.393 00:25:27.393 --- 10.0.0.4 ping statistics --- 00:25:27.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.393 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:25:27.393 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:27.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:27.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:25:27.393 00:25:27.393 --- 10.0.0.1 ping statistics --- 00:25:27.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.393 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:25:27.393 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:27.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:27.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:25:27.393 00:25:27.393 --- 10.0.0.2 ping statistics --- 00:25:27.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.393 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:25:27.393 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:27.393 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:25:27.393 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:25:27.393 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:27.651 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:27.910 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:27.910 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:27.910 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:27.910 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:27.910 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:27.910 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:27.910 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:27.910 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:27.910 14:51:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:25:27.910 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:27.910 14:51:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:27.910 14:51:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:27.910 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=82837 00:25:27.910 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 82837 00:25:27.910 14:51:36 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:25:27.910 14:51:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 82837 ']' 00:25:27.910 14:51:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.910 14:51:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:27.910 14:51:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.910 14:51:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:27.910 14:51:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:27.910 [2024-11-04 14:51:36.981348] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:25:27.910 [2024-11-04 14:51:36.981532] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:28.168 [2024-11-04 14:51:37.119097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:28.168 [2024-11-04 14:51:37.155065] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:28.168 [2024-11-04 14:51:37.155106] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:28.168 [2024-11-04 14:51:37.155113] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:28.168 [2024-11-04 14:51:37.155118] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:28.168 [2024-11-04 14:51:37.155122] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:28.168 [2024-11-04 14:51:37.155843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.168 [2024-11-04 14:51:37.155906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:28.168 [2024-11-04 14:51:37.156898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:28.168 [2024-11-04 14:51:37.156907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.168 [2024-11-04 14:51:37.186535] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:28.749 14:51:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:28.749 14:51:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:25:28.749 14:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:28.749 14:51:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:28.749 14:51:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:28.749 14:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:28.749 14:51:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:25:28.749 14:51:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:25:28.749 14:51:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:25:28.749 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:25:28.749 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:25:28.749 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:25:28.749 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:25:28.749 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:25:28.749 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:25:28.749 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:25:28.749 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:25:28.749 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:25:28.750 14:51:37 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
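The device scan traced above reduces to a single lspci pipeline: match PCI class 01 (mass storage), subclass 08 (NVM), programming interface 02 (NVMe), and print the matching BDFs, which the test then filters by driver binding. A minimal standalone sketch of that pipeline follows; the function name is illustrative, and the test's real helpers (iter_pci_class_code, pci_can_use) live in scripts/common.sh.

#!/usr/bin/env bash
# Sketch only: enumerate NVMe controllers by PCI class code, mirroring the
# lspci | grep | awk pipeline visible in the trace above.
list_nvme_bdfs() {
  local cc='"0108"'    # class 01 / subclass 08, quoted the way lspci -mm prints it
  local progif="02"    # programming interface 02 == NVMe

  # -mm machine-readable, -n numeric IDs, -D always print the PCI domain
  lspci -mm -n -D |
    grep -i -- "-p${progif}" |
    awk -v cc="$cc" '{ if ($2 == cc) print $1 }'
}

for bdf in $(list_nvme_bdfs); do
  echo "candidate NVMe controller: ${bdf}"
done

On this VM the pipeline yields 0000:00:10.0 and 0000:00:11.0, which is why the (( 2 > 0 )) check above passes and the first device becomes the spdk_target_abort controller.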
00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:28.750 14:51:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:28.750 ************************************ 00:25:28.750 START TEST spdk_target_abort 00:25:28.750 ************************************ 00:25:28.750 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:25:28.750 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:25:28.750 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:25:28.750 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.750 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:29.007 spdk_targetn1 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:29.007 [2024-11-04 14:51:37.931219] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:29.007 [2024-11-04 14:51:37.966473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:29.007 14:51:37 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:29.007 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:32.283 Initializing NVMe Controllers 00:25:32.283 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:25:32.283 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:32.283 Initialization complete. Launching workers. 
00:25:32.283 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15502, failed: 0 00:25:32.283 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1014, failed to submit 14488 00:25:32.283 success 701, unsuccessful 313, failed 0 00:25:32.283 14:51:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:32.283 14:51:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:35.691 Initializing NVMe Controllers 00:25:35.691 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:25:35.691 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:35.691 Initialization complete. Launching workers. 00:25:35.691 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8634, failed: 0 00:25:35.691 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1128, failed to submit 7506 00:25:35.691 success 376, unsuccessful 752, failed 0 00:25:35.691 14:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:35.691 14:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:38.978 Initializing NVMe Controllers 00:25:38.978 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:25:38.978 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:38.978 Initialization complete. Launching workers. 
00:25:38.978 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35191, failed: 0 00:25:38.978 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2276, failed to submit 32915 00:25:38.978 success 504, unsuccessful 1772, failed 0 00:25:38.978 14:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:25:38.978 14:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.978 14:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:38.978 14:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.978 14:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:25:38.978 14:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.978 14:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:40.876 14:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.876 14:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 82837 00:25:40.876 14:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 82837 ']' 00:25:40.876 14:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 82837 00:25:40.876 14:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:25:40.876 14:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:40.876 14:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82837 00:25:41.135 killing process with pid 82837 00:25:41.135 14:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:41.135 14:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:41.135 14:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82837' 00:25:41.135 14:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 82837 00:25:41.135 14:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 82837 00:25:41.135 ************************************ 00:25:41.135 END TEST spdk_target_abort 00:25:41.135 ************************************ 00:25:41.135 00:25:41.135 real 0m12.294s 00:25:41.135 user 0m45.504s 00:25:41.135 sys 0m1.946s 00:25:41.135 14:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:41.135 14:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:41.135 14:51:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:25:41.135 14:51:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:41.135 14:51:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:41.135 14:51:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:41.135 ************************************ 00:25:41.135 START TEST kernel_target_abort 00:25:41.135 
************************************ 00:25:41.135 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:25:41.135 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:25:41.135 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:25:41.135 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.135 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.135 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.135 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.135 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.135 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.135 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.135 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.135 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.135 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:41.135 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:41.135 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:41.135 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:41.135 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:41.135 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:41.135 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:25:41.135 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:41.135 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:41.135 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:41.135 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:41.392 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:41.392 Waiting for block devices as requested 00:25:41.650 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:41.650 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:25:41.650 No valid GPT data, bailing 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:25:41.650 No valid GPT data, bailing 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:25:41.650 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:25:41.909 No valid GPT data, bailing 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:25:41.909 No valid GPT data, bailing 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa --hostid=0c7d476c-d4d7-4594-a48a-578d93697ffa -a 10.0.0.1 -t tcp -s 4420 00:25:41.909 00:25:41.909 Discovery Log Number of Records 2, Generation counter 2 00:25:41.909 =====Discovery Log Entry 0====== 00:25:41.909 trtype: tcp 00:25:41.909 adrfam: ipv4 00:25:41.909 subtype: current discovery subsystem 00:25:41.909 treq: not specified, sq flow control disable supported 00:25:41.909 portid: 1 00:25:41.909 trsvcid: 4420 00:25:41.909 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:41.909 traddr: 10.0.0.1 00:25:41.909 eflags: none 00:25:41.909 sectype: none 00:25:41.909 =====Discovery Log Entry 1====== 00:25:41.909 trtype: tcp 00:25:41.909 adrfam: ipv4 00:25:41.909 subtype: nvme subsystem 00:25:41.909 treq: not specified, sq flow control disable supported 00:25:41.909 portid: 1 00:25:41.909 trsvcid: 4420 00:25:41.909 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:41.909 traddr: 10.0.0.1 00:25:41.909 eflags: none 00:25:41.909 sectype: none 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:25:41.909 14:51:50 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:41.909 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:41.910 14:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:45.225 Initializing NVMe Controllers 00:25:45.225 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:45.225 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:45.225 Initialization complete. Launching workers. 00:25:45.225 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 55234, failed: 0 00:25:45.225 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 55234, failed to submit 0 00:25:45.225 success 0, unsuccessful 55234, failed 0 00:25:45.225 14:51:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:45.225 14:51:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:48.505 Initializing NVMe Controllers 00:25:48.505 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:48.505 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:48.505 Initialization complete. Launching workers. 
00:25:48.505 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 89910, failed: 0 00:25:48.505 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36615, failed to submit 53295 00:25:48.505 success 0, unsuccessful 36615, failed 0 00:25:48.505 14:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:48.505 14:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:52.681 Initializing NVMe Controllers 00:25:52.681 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:52.681 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:52.681 Initialization complete. Launching workers. 00:25:52.681 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 92391, failed: 0 00:25:52.681 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23175, failed to submit 69216 00:25:52.681 success 0, unsuccessful 23175, failed 0 00:25:52.681 14:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:25:52.681 14:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:52.681 14:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:25:52.681 14:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:52.681 14:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:52.681 14:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:52.681 14:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:52.681 14:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:52.681 14:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:52.681 14:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:52.681 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:04.884 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:26:04.884 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:26:05.142 ************************************ 00:26:05.142 END TEST kernel_target_abort 00:26:05.142 ************************************ 00:26:05.142 00:26:05.142 real 0m23.855s 00:26:05.142 user 0m8.185s 00:26:05.142 sys 0m13.571s 00:26:05.142 14:52:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:05.142 14:52:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:05.142 14:52:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:05.142 14:52:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:26:05.142 
14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:05.142 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:26:05.142 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:05.142 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:26:05.142 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:05.142 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:05.142 rmmod nvme_tcp 00:26:05.142 rmmod nvme_fabrics 00:26:05.142 rmmod nvme_keyring 00:26:05.142 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:05.142 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:26:05.142 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:26:05.142 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 82837 ']' 00:26:05.142 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 82837 00:26:05.142 14:52:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 82837 ']' 00:26:05.142 14:52:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 82837 00:26:05.142 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (82837) - No such process 00:26:05.142 Process with pid 82837 is not found 00:26:05.142 14:52:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 82837 is not found' 00:26:05.142 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:26:05.142 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:05.400 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:05.400 Waiting for block devices as requested 00:26:05.658 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:05.658 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:05.658 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:05.658 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:05.658 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:26:05.658 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:26:05.658 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:26:05.658 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:05.658 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:05.658 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:05.658 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:05.658 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:05.658 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:05.658 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:05.658 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:05.658 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:05.658 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:05.658 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:05.658 14:52:14 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:05.917 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:05.917 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:05.917 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:05.917 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:05.917 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:05.917 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.917 14:52:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:05.917 14:52:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.917 14:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:26:05.917 00:26:05.917 real 0m39.031s 00:26:05.917 user 0m54.601s 00:26:05.917 sys 0m16.491s 00:26:05.917 14:52:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:05.917 14:52:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:05.917 ************************************ 00:26:05.917 END TEST nvmf_abort_qd_sizes 00:26:05.917 ************************************ 00:26:05.917 14:52:14 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:26:05.917 14:52:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:05.917 14:52:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:05.917 14:52:14 -- common/autotest_common.sh@10 -- # set +x 00:26:05.917 ************************************ 00:26:05.917 START TEST keyring_file 00:26:05.917 ************************************ 00:26:05.917 14:52:14 keyring_file -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:26:05.917 * Looking for test storage... 
00:26:05.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:26:05.917 14:52:15 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:05.917 14:52:15 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:26:05.917 14:52:15 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:06.176 14:52:15 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:06.176 14:52:15 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:06.176 14:52:15 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:06.176 14:52:15 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:06.176 14:52:15 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:26:06.176 14:52:15 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:26:06.176 14:52:15 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:26:06.176 14:52:15 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:26:06.176 14:52:15 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:26:06.176 14:52:15 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:26:06.176 14:52:15 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:26:06.176 14:52:15 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:06.176 14:52:15 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:26:06.176 14:52:15 keyring_file -- scripts/common.sh@345 -- # : 1 00:26:06.176 14:52:15 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:06.176 14:52:15 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:06.176 14:52:15 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:26:06.176 14:52:15 keyring_file -- scripts/common.sh@353 -- # local d=1 00:26:06.176 14:52:15 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:06.176 14:52:15 keyring_file -- scripts/common.sh@355 -- # echo 1 00:26:06.176 14:52:15 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:26:06.176 14:52:15 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:26:06.176 14:52:15 keyring_file -- scripts/common.sh@353 -- # local d=2 00:26:06.176 14:52:15 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:06.176 14:52:15 keyring_file -- scripts/common.sh@355 -- # echo 2 00:26:06.176 14:52:15 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:26:06.176 14:52:15 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:06.176 14:52:15 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:06.176 14:52:15 keyring_file -- scripts/common.sh@368 -- # return 0 00:26:06.176 14:52:15 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:06.176 14:52:15 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:06.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.176 --rc genhtml_branch_coverage=1 00:26:06.176 --rc genhtml_function_coverage=1 00:26:06.176 --rc genhtml_legend=1 00:26:06.176 --rc geninfo_all_blocks=1 00:26:06.176 --rc geninfo_unexecuted_blocks=1 00:26:06.176 00:26:06.176 ' 00:26:06.176 14:52:15 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:06.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.176 --rc genhtml_branch_coverage=1 00:26:06.176 --rc genhtml_function_coverage=1 00:26:06.176 --rc genhtml_legend=1 00:26:06.176 --rc geninfo_all_blocks=1 00:26:06.176 --rc 
geninfo_unexecuted_blocks=1 00:26:06.176 00:26:06.176 ' 00:26:06.176 14:52:15 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:06.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.176 --rc genhtml_branch_coverage=1 00:26:06.176 --rc genhtml_function_coverage=1 00:26:06.176 --rc genhtml_legend=1 00:26:06.176 --rc geninfo_all_blocks=1 00:26:06.176 --rc geninfo_unexecuted_blocks=1 00:26:06.176 00:26:06.176 ' 00:26:06.176 14:52:15 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:06.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.176 --rc genhtml_branch_coverage=1 00:26:06.176 --rc genhtml_function_coverage=1 00:26:06.176 --rc genhtml_legend=1 00:26:06.176 --rc geninfo_all_blocks=1 00:26:06.176 --rc geninfo_unexecuted_blocks=1 00:26:06.176 00:26:06.176 ' 00:26:06.176 14:52:15 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:26:06.176 14:52:15 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:06.176 14:52:15 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:26:06.176 14:52:15 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:06.176 14:52:15 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:06.176 14:52:15 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:06.176 14:52:15 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:06.176 14:52:15 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:06.176 14:52:15 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:06.176 14:52:15 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:06.176 14:52:15 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:06.176 14:52:15 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:06.176 14:52:15 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:06.176 14:52:15 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:26:06.176 14:52:15 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:26:06.177 14:52:15 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:06.177 14:52:15 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:06.177 14:52:15 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:06.177 14:52:15 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:06.177 14:52:15 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:06.177 14:52:15 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:26:06.177 14:52:15 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:06.177 14:52:15 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:06.177 14:52:15 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:06.177 14:52:15 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.177 14:52:15 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.177 14:52:15 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.177 14:52:15 keyring_file -- paths/export.sh@5 -- # export PATH 00:26:06.177 14:52:15 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.177 14:52:15 keyring_file -- nvmf/common.sh@51 -- # : 0 00:26:06.177 14:52:15 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:06.177 14:52:15 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:06.177 14:52:15 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:06.177 14:52:15 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:06.177 14:52:15 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:06.177 14:52:15 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:06.177 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:06.177 14:52:15 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:06.177 14:52:15 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:06.177 14:52:15 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:06.177 14:52:15 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:26:06.177 14:52:15 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:26:06.177 14:52:15 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:26:06.177 14:52:15 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:26:06.177 14:52:15 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:26:06.177 14:52:15 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:26:06.177 14:52:15 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:26:06.177 14:52:15 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:06.177 14:52:15 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:26:06.177 14:52:15 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:06.177 14:52:15 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:06.177 14:52:15 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:06.177 14:52:15 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.M5qV3oTOSo 00:26:06.177 14:52:15 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:06.177 14:52:15 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:06.177 14:52:15 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:26:06.177 14:52:15 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:26:06.177 14:52:15 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:26:06.177 14:52:15 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:26:06.177 14:52:15 keyring_file -- nvmf/common.sh@733 -- # python - 00:26:06.177 14:52:15 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.M5qV3oTOSo 00:26:06.177 14:52:15 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.M5qV3oTOSo 00:26:06.177 14:52:15 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.M5qV3oTOSo 00:26:06.177 14:52:15 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:26:06.177 14:52:15 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:06.177 14:52:15 keyring_file -- keyring/common.sh@17 -- # name=key1 00:26:06.177 14:52:15 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:26:06.177 14:52:15 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:06.177 14:52:15 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:06.177 14:52:15 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.kEFIMUNEBL 00:26:06.177 14:52:15 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:26:06.177 14:52:15 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:26:06.177 14:52:15 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:26:06.177 14:52:15 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:26:06.177 14:52:15 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:26:06.177 14:52:15 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:26:06.177 14:52:15 keyring_file -- nvmf/common.sh@733 -- # python - 00:26:06.177 14:52:15 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.kEFIMUNEBL 00:26:06.177 14:52:15 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.kEFIMUNEBL 00:26:06.177 14:52:15 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.kEFIMUNEBL 00:26:06.177 14:52:15 keyring_file -- keyring/file.sh@30 -- # tgtpid=83765 00:26:06.177 14:52:15 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:06.177 14:52:15 keyring_file -- keyring/file.sh@32 -- # waitforlisten 83765 00:26:06.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
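Both keys above follow the same prep pattern: mktemp a file, write a TLS PSK in the NVMe-oF interchange format produced by format_interchange_psk, chmod it to 0600, and later hand the path to the keyring_file module over the bperf RPC socket. A condensed sketch of that pattern is below, with a placeholder key string rather than a value actually derived by format_interchange_psk.

#!/usr/bin/env bash
# Sketch only: prepare a PSK file and register it with keyring_file over the
# bdevperf RPC socket, as the trace does for key0 and key1.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock                                    # RPC socket used by this test's bdevperf
PSK='NVMeTLSkey-1:00:REPLACE_WITH_INTERCHANGE_FORMAT_KEY:'  # placeholder, not a real key

key_path=$(mktemp)                                          # e.g. /tmp/tmp.M5qV3oTOSo in the trace
printf '%s' "$PSK" > "$key_path"
chmod 0600 "$key_path"                                      # the trace restricts the key file to 0600

"$RPC" -s "$SOCK" keyring_file_add_key key0 "$key_path"
"$RPC" -s "$SOCK" keyring_get_keys                          # should now list key0 with its path and refcnt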
00:26:06.177 14:52:15 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 83765 ']' 00:26:06.177 14:52:15 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.177 14:52:15 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:06.177 14:52:15 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:06.177 14:52:15 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:06.177 14:52:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:06.177 [2024-11-04 14:52:15.254470] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:26:06.177 [2024-11-04 14:52:15.254535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83765 ] 00:26:06.436 [2024-11-04 14:52:15.391022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.436 [2024-11-04 14:52:15.421900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.436 [2024-11-04 14:52:15.461698] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:07.002 14:52:16 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:07.002 14:52:16 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:26:07.002 14:52:16 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:26:07.002 14:52:16 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.002 14:52:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:07.002 [2024-11-04 14:52:16.116847] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:07.002 null0 00:26:07.261 [2024-11-04 14:52:16.148810] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:07.261 [2024-11-04 14:52:16.148939] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:07.261 14:52:16 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.261 14:52:16 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:07.261 14:52:16 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:26:07.261 14:52:16 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:07.261 14:52:16 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:07.261 14:52:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:07.261 14:52:16 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:07.261 14:52:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:07.261 14:52:16 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:07.261 14:52:16 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.261 14:52:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:07.261 [2024-11-04 14:52:16.176803] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:26:07.261 request: 00:26:07.261 { 
00:26:07.261 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:26:07.261 "secure_channel": false, 00:26:07.261 "listen_address": { 00:26:07.261 "trtype": "tcp", 00:26:07.261 "traddr": "127.0.0.1", 00:26:07.261 "trsvcid": "4420" 00:26:07.261 }, 00:26:07.261 "method": "nvmf_subsystem_add_listener", 00:26:07.261 "req_id": 1 00:26:07.261 } 00:26:07.261 Got JSON-RPC error response 00:26:07.261 response: 00:26:07.261 { 00:26:07.261 "code": -32602, 00:26:07.261 "message": "Invalid parameters" 00:26:07.261 } 00:26:07.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:07.261 14:52:16 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:07.261 14:52:16 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:26:07.261 14:52:16 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:07.261 14:52:16 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:07.261 14:52:16 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:07.261 14:52:16 keyring_file -- keyring/file.sh@47 -- # bperfpid=83778 00:26:07.261 14:52:16 keyring_file -- keyring/file.sh@49 -- # waitforlisten 83778 /var/tmp/bperf.sock 00:26:07.261 14:52:16 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 83778 ']' 00:26:07.261 14:52:16 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:07.261 14:52:16 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:07.261 14:52:16 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:07.261 14:52:16 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:07.261 14:52:16 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:26:07.261 14:52:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:07.261 [2024-11-04 14:52:16.221561] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
00:26:07.261 [2024-11-04 14:52:16.221758] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83778 ] 00:26:07.261 [2024-11-04 14:52:16.362103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.519 [2024-11-04 14:52:16.402960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:07.519 [2024-11-04 14:52:16.437392] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:08.086 14:52:17 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:08.086 14:52:17 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:26:08.086 14:52:17 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.M5qV3oTOSo 00:26:08.086 14:52:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.M5qV3oTOSo 00:26:08.344 14:52:17 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.kEFIMUNEBL 00:26:08.344 14:52:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.kEFIMUNEBL 00:26:08.602 14:52:17 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:26:08.602 14:52:17 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:26:08.602 14:52:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:08.602 14:52:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:08.602 14:52:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:08.602 14:52:17 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.M5qV3oTOSo == \/\t\m\p\/\t\m\p\.\M\5\q\V\3\o\T\O\S\o ]] 00:26:08.602 14:52:17 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:26:08.602 14:52:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:08.602 14:52:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:08.602 14:52:17 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:26:08.602 14:52:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:08.859 14:52:17 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.kEFIMUNEBL == \/\t\m\p\/\t\m\p\.\k\E\F\I\M\U\N\E\B\L ]] 00:26:08.860 14:52:17 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:26:08.860 14:52:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:08.860 14:52:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:08.860 14:52:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:08.860 14:52:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:08.860 14:52:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:09.117 14:52:18 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:26:09.117 14:52:18 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:26:09.117 14:52:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:09.117 14:52:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:09.117 14:52:18 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:09.117 14:52:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:09.117 14:52:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:09.375 14:52:18 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:26:09.375 14:52:18 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:09.375 14:52:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:09.375 [2024-11-04 14:52:18.460469] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:09.633 nvme0n1 00:26:09.633 14:52:18 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:26:09.633 14:52:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:09.633 14:52:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:09.633 14:52:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:09.633 14:52:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:09.633 14:52:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:09.633 14:52:18 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:26:09.633 14:52:18 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:26:09.633 14:52:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:09.633 14:52:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:09.633 14:52:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:09.633 14:52:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:09.633 14:52:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:09.891 14:52:18 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:26:09.891 14:52:18 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:10.150 Running I/O for 1 seconds... 
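Up to this point the trace has registered two PSK files on the bdevperf instance (keyring/file.sh@50-51), verified their paths and refcounts via keyring_get_keys piped through jq, and attached an NVMe/TCP controller that authenticates with key0 (file.sh@58); the perform_tests call above then drives I/O through it. A condensed sketch of that flow, using only rpc.py calls and arguments that appear verbatim in the trace (the /tmp/tmp.* paths are the mktemp names from this particular run):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

  # register both key files with the keyring module
  $rpc keyring_file_add_key key0 /tmp/tmp.M5qV3oTOSo
  $rpc keyring_file_add_key key1 /tmp/tmp.kEFIMUNEBL

  # inspect what was registered (path, refcnt) for a single key
  $rpc keyring_get_keys | jq '.[] | select(.name == "key0")'

  # attach a TLS-protected NVMe/TCP controller using key0 as the PSK
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0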
00:26:11.084 20800.00 IOPS, 81.25 MiB/s 00:26:11.084 Latency(us) 00:26:11.084 [2024-11-04T14:52:20.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.084 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:26:11.084 nvme0n1 : 1.00 20833.93 81.38 0.00 0.00 6130.50 3276.80 16636.06 00:26:11.084 [2024-11-04T14:52:20.224Z] =================================================================================================================== 00:26:11.084 [2024-11-04T14:52:20.224Z] Total : 20833.93 81.38 0.00 0.00 6130.50 3276.80 16636.06 00:26:11.084 { 00:26:11.084 "results": [ 00:26:11.084 { 00:26:11.084 "job": "nvme0n1", 00:26:11.084 "core_mask": "0x2", 00:26:11.084 "workload": "randrw", 00:26:11.084 "percentage": 50, 00:26:11.084 "status": "finished", 00:26:11.084 "queue_depth": 128, 00:26:11.084 "io_size": 4096, 00:26:11.084 "runtime": 1.004563, 00:26:11.084 "iops": 20833.934755709695, 00:26:11.084 "mibps": 81.382557639491, 00:26:11.084 "io_failed": 0, 00:26:11.084 "io_timeout": 0, 00:26:11.084 "avg_latency_us": 6130.498165739845, 00:26:11.084 "min_latency_us": 3276.8, 00:26:11.084 "max_latency_us": 16636.061538461538 00:26:11.084 } 00:26:11.084 ], 00:26:11.084 "core_count": 1 00:26:11.084 } 00:26:11.084 14:52:20 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:26:11.084 14:52:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:26:11.341 14:52:20 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:26:11.341 14:52:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:11.341 14:52:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:11.341 14:52:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:11.341 14:52:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:11.341 14:52:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:11.599 14:52:20 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:26:11.599 14:52:20 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:26:11.599 14:52:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:11.599 14:52:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:11.599 14:52:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:11.599 14:52:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:11.599 14:52:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:11.599 14:52:20 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:26:11.599 14:52:20 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:11.599 14:52:20 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:26:11.599 14:52:20 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:11.599 14:52:20 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:26:11.599 14:52:20 keyring_file -- common/autotest_common.sh@642 -- # 
case "$(type -t "$arg")" in 00:26:11.599 14:52:20 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:26:11.599 14:52:20 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:11.599 14:52:20 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:11.599 14:52:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:11.856 [2024-11-04 14:52:20.860665] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:26:11.856 [2024-11-04 14:52:20.861054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ce770 (107): Transport endpoint is not connected 00:26:11.856 [2024-11-04 14:52:20.862042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ce770 (9): Bad file descriptor 00:26:11.856 [2024-11-04 14:52:20.863040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:26:11.856 [2024-11-04 14:52:20.863097] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:26:11.856 [2024-11-04 14:52:20.863138] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:26:11.856 [2024-11-04 14:52:20.863173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:26:11.856 request: 00:26:11.856 { 00:26:11.856 "name": "nvme0", 00:26:11.856 "trtype": "tcp", 00:26:11.856 "traddr": "127.0.0.1", 00:26:11.856 "adrfam": "ipv4", 00:26:11.856 "trsvcid": "4420", 00:26:11.856 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:11.856 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:11.856 "prchk_reftag": false, 00:26:11.856 "prchk_guard": false, 00:26:11.856 "hdgst": false, 00:26:11.856 "ddgst": false, 00:26:11.856 "psk": "key1", 00:26:11.856 "allow_unrecognized_csi": false, 00:26:11.856 "method": "bdev_nvme_attach_controller", 00:26:11.856 "req_id": 1 00:26:11.856 } 00:26:11.856 Got JSON-RPC error response 00:26:11.856 response: 00:26:11.856 { 00:26:11.856 "code": -5, 00:26:11.856 "message": "Input/output error" 00:26:11.856 } 00:26:11.856 14:52:20 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:26:11.856 14:52:20 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:11.856 14:52:20 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:11.856 14:52:20 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:11.856 14:52:20 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:26:11.856 14:52:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:11.856 14:52:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:11.856 14:52:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:11.856 14:52:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:11.856 14:52:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:12.112 14:52:21 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:26:12.112 14:52:21 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:26:12.112 14:52:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:12.112 14:52:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:12.112 14:52:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:12.112 14:52:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:12.112 14:52:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:12.369 14:52:21 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:26:12.369 14:52:21 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:26:12.369 14:52:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:12.369 14:52:21 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:26:12.369 14:52:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:26:12.627 14:52:21 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:26:12.627 14:52:21 keyring_file -- keyring/file.sh@78 -- # jq length 00:26:12.627 14:52:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:12.884 14:52:21 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:26:12.884 14:52:21 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.M5qV3oTOSo 00:26:12.884 14:52:21 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.M5qV3oTOSo 00:26:12.884 14:52:21 keyring_file -- 
common/autotest_common.sh@650 -- # local es=0 00:26:12.884 14:52:21 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.M5qV3oTOSo 00:26:12.884 14:52:21 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:26:12.884 14:52:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:12.884 14:52:21 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:26:12.884 14:52:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:12.884 14:52:21 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.M5qV3oTOSo 00:26:12.884 14:52:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.M5qV3oTOSo 00:26:13.141 [2024-11-04 14:52:22.105408] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.M5qV3oTOSo': 0100660 00:26:13.141 [2024-11-04 14:52:22.105710] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:26:13.141 request: 00:26:13.141 { 00:26:13.141 "name": "key0", 00:26:13.141 "path": "/tmp/tmp.M5qV3oTOSo", 00:26:13.141 "method": "keyring_file_add_key", 00:26:13.141 "req_id": 1 00:26:13.141 } 00:26:13.141 Got JSON-RPC error response 00:26:13.141 response: 00:26:13.141 { 00:26:13.141 "code": -1, 00:26:13.141 "message": "Operation not permitted" 00:26:13.141 } 00:26:13.141 14:52:22 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:26:13.141 14:52:22 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:13.141 14:52:22 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:13.141 14:52:22 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:13.141 14:52:22 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.M5qV3oTOSo 00:26:13.141 14:52:22 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.M5qV3oTOSo 00:26:13.141 14:52:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.M5qV3oTOSo 00:26:13.400 14:52:22 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.M5qV3oTOSo 00:26:13.400 14:52:22 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:26:13.400 14:52:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:13.400 14:52:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:13.400 14:52:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:13.400 14:52:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:13.400 14:52:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:13.400 14:52:22 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:26:13.400 14:52:22 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:13.400 14:52:22 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:26:13.400 14:52:22 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:13.400 14:52:22 
keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:26:13.400 14:52:22 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:13.400 14:52:22 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:26:13.400 14:52:22 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:13.400 14:52:22 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:13.400 14:52:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:13.657 [2024-11-04 14:52:22.649545] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.M5qV3oTOSo': No such file or directory 00:26:13.657 [2024-11-04 14:52:22.649735] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:26:13.657 [2024-11-04 14:52:22.649798] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:26:13.657 [2024-11-04 14:52:22.649839] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:26:13.657 [2024-11-04 14:52:22.649876] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:13.657 [2024-11-04 14:52:22.649910] bdev_nvme.c:6667:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:26:13.657 request: 00:26:13.657 { 00:26:13.657 "name": "nvme0", 00:26:13.657 "trtype": "tcp", 00:26:13.657 "traddr": "127.0.0.1", 00:26:13.657 "adrfam": "ipv4", 00:26:13.657 "trsvcid": "4420", 00:26:13.657 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:13.657 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:13.657 "prchk_reftag": false, 00:26:13.657 "prchk_guard": false, 00:26:13.657 "hdgst": false, 00:26:13.657 "ddgst": false, 00:26:13.657 "psk": "key0", 00:26:13.657 "allow_unrecognized_csi": false, 00:26:13.657 "method": "bdev_nvme_attach_controller", 00:26:13.657 "req_id": 1 00:26:13.657 } 00:26:13.657 Got JSON-RPC error response 00:26:13.657 response: 00:26:13.657 { 00:26:13.657 "code": -19, 00:26:13.657 "message": "No such device" 00:26:13.657 } 00:26:13.657 14:52:22 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:26:13.657 14:52:22 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:13.657 14:52:22 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:13.657 14:52:22 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:13.657 14:52:22 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:26:13.657 14:52:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:13.914 14:52:22 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:26:13.914 14:52:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:13.914 14:52:22 keyring_file -- keyring/common.sh@17 -- # name=key0 00:26:13.914 14:52:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:13.914 
14:52:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:13.914 14:52:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:13.914 14:52:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.l7tjKXc9hw 00:26:13.914 14:52:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:13.914 14:52:22 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:13.914 14:52:22 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:26:13.914 14:52:22 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:26:13.914 14:52:22 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:26:13.914 14:52:22 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:26:13.914 14:52:22 keyring_file -- nvmf/common.sh@733 -- # python - 00:26:13.914 14:52:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.l7tjKXc9hw 00:26:13.914 14:52:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.l7tjKXc9hw 00:26:13.914 14:52:22 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.l7tjKXc9hw 00:26:13.914 14:52:22 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.l7tjKXc9hw 00:26:13.914 14:52:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.l7tjKXc9hw 00:26:14.172 14:52:23 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:14.172 14:52:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:14.429 nvme0n1 00:26:14.429 14:52:23 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:26:14.429 14:52:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:14.429 14:52:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:14.430 14:52:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:14.430 14:52:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:14.430 14:52:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:14.687 14:52:23 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:26:14.687 14:52:23 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:26:14.687 14:52:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:14.687 14:52:23 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:26:14.687 14:52:23 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:26:14.687 14:52:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:14.687 14:52:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:14.687 14:52:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:14.944 14:52:23 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:26:14.944 14:52:23 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:26:14.944 14:52:23 keyring_file -- 
keyring/common.sh@12 -- # jq -r .refcnt 00:26:14.944 14:52:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:14.944 14:52:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:14.944 14:52:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:14.945 14:52:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:15.202 14:52:24 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:26:15.202 14:52:24 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:26:15.202 14:52:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:26:15.459 14:52:24 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:26:15.459 14:52:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:15.459 14:52:24 keyring_file -- keyring/file.sh@105 -- # jq length 00:26:15.459 14:52:24 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:26:15.459 14:52:24 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.l7tjKXc9hw 00:26:15.459 14:52:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.l7tjKXc9hw 00:26:15.716 14:52:24 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.kEFIMUNEBL 00:26:15.716 14:52:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.kEFIMUNEBL 00:26:15.974 14:52:24 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:15.974 14:52:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:16.231 nvme0n1 00:26:16.231 14:52:25 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:26:16.231 14:52:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:26:16.489 14:52:25 keyring_file -- keyring/file.sh@113 -- # config='{ 00:26:16.489 "subsystems": [ 00:26:16.489 { 00:26:16.489 "subsystem": "keyring", 00:26:16.489 "config": [ 00:26:16.489 { 00:26:16.489 "method": "keyring_file_add_key", 00:26:16.489 "params": { 00:26:16.489 "name": "key0", 00:26:16.489 "path": "/tmp/tmp.l7tjKXc9hw" 00:26:16.489 } 00:26:16.489 }, 00:26:16.489 { 00:26:16.489 "method": "keyring_file_add_key", 00:26:16.489 "params": { 00:26:16.489 "name": "key1", 00:26:16.489 "path": "/tmp/tmp.kEFIMUNEBL" 00:26:16.489 } 00:26:16.489 } 00:26:16.489 ] 00:26:16.489 }, 00:26:16.489 { 00:26:16.489 "subsystem": "iobuf", 00:26:16.489 "config": [ 00:26:16.489 { 00:26:16.489 "method": "iobuf_set_options", 00:26:16.489 "params": { 00:26:16.489 "small_pool_count": 8192, 00:26:16.489 "large_pool_count": 1024, 00:26:16.489 "small_bufsize": 8192, 00:26:16.489 "large_bufsize": 135168, 00:26:16.489 "enable_numa": false 00:26:16.489 } 00:26:16.489 } 00:26:16.489 ] 00:26:16.489 }, 00:26:16.489 { 00:26:16.489 "subsystem": 
"sock", 00:26:16.489 "config": [ 00:26:16.489 { 00:26:16.489 "method": "sock_set_default_impl", 00:26:16.489 "params": { 00:26:16.489 "impl_name": "uring" 00:26:16.489 } 00:26:16.489 }, 00:26:16.489 { 00:26:16.489 "method": "sock_impl_set_options", 00:26:16.489 "params": { 00:26:16.489 "impl_name": "ssl", 00:26:16.489 "recv_buf_size": 4096, 00:26:16.489 "send_buf_size": 4096, 00:26:16.489 "enable_recv_pipe": true, 00:26:16.489 "enable_quickack": false, 00:26:16.489 "enable_placement_id": 0, 00:26:16.489 "enable_zerocopy_send_server": true, 00:26:16.489 "enable_zerocopy_send_client": false, 00:26:16.489 "zerocopy_threshold": 0, 00:26:16.489 "tls_version": 0, 00:26:16.490 "enable_ktls": false 00:26:16.490 } 00:26:16.490 }, 00:26:16.490 { 00:26:16.490 "method": "sock_impl_set_options", 00:26:16.490 "params": { 00:26:16.490 "impl_name": "posix", 00:26:16.490 "recv_buf_size": 2097152, 00:26:16.490 "send_buf_size": 2097152, 00:26:16.490 "enable_recv_pipe": true, 00:26:16.490 "enable_quickack": false, 00:26:16.490 "enable_placement_id": 0, 00:26:16.490 "enable_zerocopy_send_server": true, 00:26:16.490 "enable_zerocopy_send_client": false, 00:26:16.490 "zerocopy_threshold": 0, 00:26:16.490 "tls_version": 0, 00:26:16.490 "enable_ktls": false 00:26:16.490 } 00:26:16.490 }, 00:26:16.490 { 00:26:16.490 "method": "sock_impl_set_options", 00:26:16.490 "params": { 00:26:16.490 "impl_name": "uring", 00:26:16.490 "recv_buf_size": 2097152, 00:26:16.490 "send_buf_size": 2097152, 00:26:16.490 "enable_recv_pipe": true, 00:26:16.490 "enable_quickack": false, 00:26:16.490 "enable_placement_id": 0, 00:26:16.490 "enable_zerocopy_send_server": false, 00:26:16.490 "enable_zerocopy_send_client": false, 00:26:16.490 "zerocopy_threshold": 0, 00:26:16.490 "tls_version": 0, 00:26:16.490 "enable_ktls": false 00:26:16.490 } 00:26:16.490 } 00:26:16.490 ] 00:26:16.490 }, 00:26:16.490 { 00:26:16.490 "subsystem": "vmd", 00:26:16.490 "config": [] 00:26:16.490 }, 00:26:16.490 { 00:26:16.490 "subsystem": "accel", 00:26:16.490 "config": [ 00:26:16.490 { 00:26:16.490 "method": "accel_set_options", 00:26:16.490 "params": { 00:26:16.490 "small_cache_size": 128, 00:26:16.490 "large_cache_size": 16, 00:26:16.490 "task_count": 2048, 00:26:16.490 "sequence_count": 2048, 00:26:16.490 "buf_count": 2048 00:26:16.490 } 00:26:16.490 } 00:26:16.490 ] 00:26:16.490 }, 00:26:16.490 { 00:26:16.490 "subsystem": "bdev", 00:26:16.490 "config": [ 00:26:16.490 { 00:26:16.490 "method": "bdev_set_options", 00:26:16.490 "params": { 00:26:16.490 "bdev_io_pool_size": 65535, 00:26:16.490 "bdev_io_cache_size": 256, 00:26:16.490 "bdev_auto_examine": true, 00:26:16.490 "iobuf_small_cache_size": 128, 00:26:16.490 "iobuf_large_cache_size": 16 00:26:16.490 } 00:26:16.490 }, 00:26:16.490 { 00:26:16.490 "method": "bdev_raid_set_options", 00:26:16.490 "params": { 00:26:16.490 "process_window_size_kb": 1024, 00:26:16.490 "process_max_bandwidth_mb_sec": 0 00:26:16.490 } 00:26:16.490 }, 00:26:16.490 { 00:26:16.490 "method": "bdev_iscsi_set_options", 00:26:16.490 "params": { 00:26:16.490 "timeout_sec": 30 00:26:16.490 } 00:26:16.490 }, 00:26:16.490 { 00:26:16.490 "method": "bdev_nvme_set_options", 00:26:16.490 "params": { 00:26:16.490 "action_on_timeout": "none", 00:26:16.490 "timeout_us": 0, 00:26:16.490 "timeout_admin_us": 0, 00:26:16.490 "keep_alive_timeout_ms": 10000, 00:26:16.490 "arbitration_burst": 0, 00:26:16.490 "low_priority_weight": 0, 00:26:16.490 "medium_priority_weight": 0, 00:26:16.490 "high_priority_weight": 0, 00:26:16.490 "nvme_adminq_poll_period_us": 
10000, 00:26:16.490 "nvme_ioq_poll_period_us": 0, 00:26:16.490 "io_queue_requests": 512, 00:26:16.490 "delay_cmd_submit": true, 00:26:16.490 "transport_retry_count": 4, 00:26:16.490 "bdev_retry_count": 3, 00:26:16.490 "transport_ack_timeout": 0, 00:26:16.490 "ctrlr_loss_timeout_sec": 0, 00:26:16.490 "reconnect_delay_sec": 0, 00:26:16.490 "fast_io_fail_timeout_sec": 0, 00:26:16.490 "disable_auto_failback": false, 00:26:16.490 "generate_uuids": false, 00:26:16.490 "transport_tos": 0, 00:26:16.490 "nvme_error_stat": false, 00:26:16.490 "rdma_srq_size": 0, 00:26:16.490 "io_path_stat": false, 00:26:16.490 "allow_accel_sequence": false, 00:26:16.490 "rdma_max_cq_size": 0, 00:26:16.490 "rdma_cm_event_timeout_ms": 0, 00:26:16.490 "dhchap_digests": [ 00:26:16.490 "sha256", 00:26:16.490 "sha384", 00:26:16.490 "sha512" 00:26:16.490 ], 00:26:16.490 "dhchap_dhgroups": [ 00:26:16.490 "null", 00:26:16.490 "ffdhe2048", 00:26:16.490 "ffdhe3072", 00:26:16.490 "ffdhe4096", 00:26:16.490 "ffdhe6144", 00:26:16.490 "ffdhe8192" 00:26:16.490 ] 00:26:16.490 } 00:26:16.490 }, 00:26:16.490 { 00:26:16.490 "method": "bdev_nvme_attach_controller", 00:26:16.490 "params": { 00:26:16.490 "name": "nvme0", 00:26:16.490 "trtype": "TCP", 00:26:16.490 "adrfam": "IPv4", 00:26:16.490 "traddr": "127.0.0.1", 00:26:16.490 "trsvcid": "4420", 00:26:16.490 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:16.490 "prchk_reftag": false, 00:26:16.490 "prchk_guard": false, 00:26:16.490 "ctrlr_loss_timeout_sec": 0, 00:26:16.490 "reconnect_delay_sec": 0, 00:26:16.490 "fast_io_fail_timeout_sec": 0, 00:26:16.490 "psk": "key0", 00:26:16.490 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:16.490 "hdgst": false, 00:26:16.490 "ddgst": false, 00:26:16.490 "multipath": "multipath" 00:26:16.490 } 00:26:16.490 }, 00:26:16.490 { 00:26:16.490 "method": "bdev_nvme_set_hotplug", 00:26:16.490 "params": { 00:26:16.490 "period_us": 100000, 00:26:16.490 "enable": false 00:26:16.490 } 00:26:16.490 }, 00:26:16.490 { 00:26:16.490 "method": "bdev_wait_for_examine" 00:26:16.490 } 00:26:16.490 ] 00:26:16.490 }, 00:26:16.490 { 00:26:16.490 "subsystem": "nbd", 00:26:16.490 "config": [] 00:26:16.490 } 00:26:16.490 ] 00:26:16.490 }' 00:26:16.490 14:52:25 keyring_file -- keyring/file.sh@115 -- # killprocess 83778 00:26:16.490 14:52:25 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 83778 ']' 00:26:16.490 14:52:25 keyring_file -- common/autotest_common.sh@956 -- # kill -0 83778 00:26:16.490 14:52:25 keyring_file -- common/autotest_common.sh@957 -- # uname 00:26:16.490 14:52:25 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:16.490 14:52:25 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83778 00:26:16.490 killing process with pid 83778 00:26:16.490 Received shutdown signal, test time was about 1.000000 seconds 00:26:16.490 00:26:16.490 Latency(us) 00:26:16.490 [2024-11-04T14:52:25.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:16.490 [2024-11-04T14:52:25.630Z] =================================================================================================================== 00:26:16.490 [2024-11-04T14:52:25.630Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:16.490 14:52:25 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:16.490 14:52:25 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:16.490 14:52:25 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83778' 00:26:16.490 
14:52:25 keyring_file -- common/autotest_common.sh@971 -- # kill 83778 00:26:16.490 14:52:25 keyring_file -- common/autotest_common.sh@976 -- # wait 83778 00:26:16.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:16.490 14:52:25 keyring_file -- keyring/file.sh@118 -- # bperfpid=84011 00:26:16.490 14:52:25 keyring_file -- keyring/file.sh@120 -- # waitforlisten 84011 /var/tmp/bperf.sock 00:26:16.490 14:52:25 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 84011 ']' 00:26:16.490 14:52:25 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:16.490 14:52:25 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:16.490 14:52:25 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:26:16.490 14:52:25 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:16.490 14:52:25 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:16.490 14:52:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:16.490 14:52:25 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:26:16.490 "subsystems": [ 00:26:16.490 { 00:26:16.490 "subsystem": "keyring", 00:26:16.490 "config": [ 00:26:16.490 { 00:26:16.490 "method": "keyring_file_add_key", 00:26:16.490 "params": { 00:26:16.490 "name": "key0", 00:26:16.490 "path": "/tmp/tmp.l7tjKXc9hw" 00:26:16.490 } 00:26:16.490 }, 00:26:16.490 { 00:26:16.490 "method": "keyring_file_add_key", 00:26:16.490 "params": { 00:26:16.490 "name": "key1", 00:26:16.490 "path": "/tmp/tmp.kEFIMUNEBL" 00:26:16.490 } 00:26:16.490 } 00:26:16.490 ] 00:26:16.490 }, 00:26:16.490 { 00:26:16.490 "subsystem": "iobuf", 00:26:16.490 "config": [ 00:26:16.490 { 00:26:16.490 "method": "iobuf_set_options", 00:26:16.490 "params": { 00:26:16.490 "small_pool_count": 8192, 00:26:16.491 "large_pool_count": 1024, 00:26:16.491 "small_bufsize": 8192, 00:26:16.491 "large_bufsize": 135168, 00:26:16.491 "enable_numa": false 00:26:16.491 } 00:26:16.491 } 00:26:16.491 ] 00:26:16.491 }, 00:26:16.491 { 00:26:16.491 "subsystem": "sock", 00:26:16.491 "config": [ 00:26:16.491 { 00:26:16.491 "method": "sock_set_default_impl", 00:26:16.491 "params": { 00:26:16.491 "impl_name": "uring" 00:26:16.491 } 00:26:16.491 }, 00:26:16.491 { 00:26:16.491 "method": "sock_impl_set_options", 00:26:16.491 "params": { 00:26:16.491 "impl_name": "ssl", 00:26:16.491 "recv_buf_size": 4096, 00:26:16.491 "send_buf_size": 4096, 00:26:16.491 "enable_recv_pipe": true, 00:26:16.491 "enable_quickack": false, 00:26:16.491 "enable_placement_id": 0, 00:26:16.491 "enable_zerocopy_send_server": true, 00:26:16.491 "enable_zerocopy_send_client": false, 00:26:16.491 "zerocopy_threshold": 0, 00:26:16.491 "tls_version": 0, 00:26:16.491 "enable_ktls": false 00:26:16.491 } 00:26:16.491 }, 00:26:16.491 { 00:26:16.491 "method": "sock_impl_set_options", 00:26:16.491 "params": { 00:26:16.491 "impl_name": "posix", 00:26:16.491 "recv_buf_size": 2097152, 00:26:16.491 "send_buf_size": 2097152, 00:26:16.491 "enable_recv_pipe": true, 00:26:16.491 "enable_quickack": false, 00:26:16.491 "enable_placement_id": 0, 00:26:16.491 "enable_zerocopy_send_server": true, 00:26:16.491 "enable_zerocopy_send_client": false, 00:26:16.491 "zerocopy_threshold": 0, 00:26:16.491 "tls_version": 0, 00:26:16.491 "enable_ktls": false 
00:26:16.491 } 00:26:16.491 }, 00:26:16.491 { 00:26:16.491 "method": "sock_impl_set_options", 00:26:16.491 "params": { 00:26:16.491 "impl_name": "uring", 00:26:16.491 "recv_buf_size": 2097152, 00:26:16.491 "send_buf_size": 2097152, 00:26:16.491 "enable_recv_pipe": true, 00:26:16.491 "enable_quickack": false, 00:26:16.491 "enable_placement_id": 0, 00:26:16.491 "enable_zerocopy_send_server": false, 00:26:16.491 "enable_zerocopy_send_client": false, 00:26:16.491 "zerocopy_threshold": 0, 00:26:16.491 "tls_version": 0, 00:26:16.491 "enable_ktls": false 00:26:16.491 } 00:26:16.491 } 00:26:16.491 ] 00:26:16.491 }, 00:26:16.491 { 00:26:16.491 "subsystem": "vmd", 00:26:16.491 "config": [] 00:26:16.491 }, 00:26:16.491 { 00:26:16.491 "subsystem": "accel", 00:26:16.491 "config": [ 00:26:16.491 { 00:26:16.491 "method": "accel_set_options", 00:26:16.491 "params": { 00:26:16.491 "small_cache_size": 128, 00:26:16.491 "large_cache_size": 16, 00:26:16.491 "task_count": 2048, 00:26:16.491 "sequence_count": 2048, 00:26:16.491 "buf_count": 2048 00:26:16.491 } 00:26:16.491 } 00:26:16.491 ] 00:26:16.491 }, 00:26:16.491 { 00:26:16.491 "subsystem": "bdev", 00:26:16.491 "config": [ 00:26:16.491 { 00:26:16.491 "method": "bdev_set_options", 00:26:16.491 "params": { 00:26:16.491 "bdev_io_pool_size": 65535, 00:26:16.491 "bdev_io_cache_size": 256, 00:26:16.491 "bdev_auto_examine": true, 00:26:16.491 "iobuf_small_cache_size": 128, 00:26:16.491 "iobuf_large_cache_size": 16 00:26:16.491 } 00:26:16.491 }, 00:26:16.491 { 00:26:16.491 "method": "bdev_raid_set_options", 00:26:16.491 "params": { 00:26:16.491 "process_window_size_kb": 1024, 00:26:16.491 "process_max_bandwidth_mb_sec": 0 00:26:16.491 } 00:26:16.491 }, 00:26:16.491 { 00:26:16.491 "method": "bdev_iscsi_set_options", 00:26:16.491 "params": { 00:26:16.491 "timeout_sec": 30 00:26:16.491 } 00:26:16.491 }, 00:26:16.491 { 00:26:16.491 "method": "bdev_nvme_set_options", 00:26:16.491 "params": { 00:26:16.491 "action_on_timeout": "none", 00:26:16.491 "timeout_us": 0, 00:26:16.491 "timeout_admin_us": 0, 00:26:16.491 "keep_alive_timeout_ms": 10000, 00:26:16.491 "arbitration_burst": 0, 00:26:16.491 "low_priority_weight": 0, 00:26:16.491 "medium_priority_weight": 0, 00:26:16.491 "high_priority_weight": 0, 00:26:16.491 "nvme_adminq_poll_period_us": 10000, 00:26:16.491 "nvme_ioq_poll_period_us": 0, 00:26:16.491 "io_queue_requests": 512, 00:26:16.491 "delay_cmd_submit": true, 00:26:16.491 "transport_retry_count": 4, 00:26:16.491 "bdev_retry_count": 3, 00:26:16.491 "transport_ack_timeout": 0, 00:26:16.491 "ctrlr_loss_timeout_sec": 0, 00:26:16.491 "reconnect_delay_sec": 0, 00:26:16.491 "fast_io_fail_timeout_sec": 0, 00:26:16.491 "disable_auto_failback": false, 00:26:16.491 "generate_uuids": false, 00:26:16.491 "transport_tos": 0, 00:26:16.491 "nvme_error_stat": false, 00:26:16.491 "rdma_srq_size": 0, 00:26:16.491 "io_path_stat": false, 00:26:16.491 "allow_accel_sequence": false, 00:26:16.491 "rdma_max_cq_size": 0, 00:26:16.491 "rdma_cm_event_timeout_ms": 0, 00:26:16.491 "dhchap_digests": [ 00:26:16.491 "sha256", 00:26:16.491 "sha384", 00:26:16.491 "sha512" 00:26:16.491 ], 00:26:16.491 "dhchap_dhgroups": [ 00:26:16.491 "null", 00:26:16.491 "ffdhe2048", 00:26:16.491 "ffdhe3072", 00:26:16.491 "ffdhe4096", 00:26:16.491 "ffdhe6144", 00:26:16.491 "ffdhe8192" 00:26:16.491 ] 00:26:16.491 } 00:26:16.491 }, 00:26:16.491 { 00:26:16.491 "method": "bdev_nvme_attach_controller", 00:26:16.491 "params": { 00:26:16.491 "name": "nvme0", 00:26:16.491 "trtype": "TCP", 00:26:16.491 "adrfam": "IPv4", 
00:26:16.491 "traddr": "127.0.0.1", 00:26:16.491 "trsvcid": "4420", 00:26:16.491 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:16.491 "prchk_reftag": false, 00:26:16.491 "prchk_guard": false, 00:26:16.491 "ctrlr_loss_timeout_sec": 0, 00:26:16.491 "reconnect_delay_sec": 0, 00:26:16.491 "fast_io_fail_timeout_sec": 0, 00:26:16.491 "psk": "key0", 00:26:16.491 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:16.491 "hdgst": false, 00:26:16.491 "ddgst": false, 00:26:16.491 "multipath": "multipath" 00:26:16.491 } 00:26:16.491 }, 00:26:16.491 { 00:26:16.491 "method": "bdev_nvme_set_hotplug", 00:26:16.491 "params": { 00:26:16.491 "period_us": 100000, 00:26:16.491 "enable": false 00:26:16.491 } 00:26:16.491 }, 00:26:16.491 { 00:26:16.491 "method": "bdev_wait_for_examine" 00:26:16.491 } 00:26:16.491 ] 00:26:16.491 }, 00:26:16.491 { 00:26:16.492 "subsystem": "nbd", 00:26:16.492 "config": [] 00:26:16.492 } 00:26:16.492 ] 00:26:16.492 }' 00:26:16.492 [2024-11-04 14:52:25.625458] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 00:26:16.492 [2024-11-04 14:52:25.625694] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84011 ] 00:26:16.769 [2024-11-04 14:52:25.753980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.769 [2024-11-04 14:52:25.784023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:16.769 [2024-11-04 14:52:25.892721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:17.034 [2024-11-04 14:52:25.934079] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:17.598 14:52:26 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:17.598 14:52:26 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:26:17.598 14:52:26 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:26:17.598 14:52:26 keyring_file -- keyring/file.sh@121 -- # jq length 00:26:17.598 14:52:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:17.598 14:52:26 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:26:17.598 14:52:26 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:26:17.598 14:52:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:17.598 14:52:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:17.598 14:52:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:17.598 14:52:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:17.598 14:52:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:17.855 14:52:26 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:26:17.855 14:52:26 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:26:17.855 14:52:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:17.855 14:52:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:17.855 14:52:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:17.855 14:52:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:17.855 14:52:26 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:18.113 14:52:27 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:26:18.113 14:52:27 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:26:18.113 14:52:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:26:18.113 14:52:27 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:26:18.371 14:52:27 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:26:18.371 14:52:27 keyring_file -- keyring/file.sh@1 -- # cleanup 00:26:18.371 14:52:27 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.l7tjKXc9hw /tmp/tmp.kEFIMUNEBL 00:26:18.371 14:52:27 keyring_file -- keyring/file.sh@20 -- # killprocess 84011 00:26:18.371 14:52:27 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 84011 ']' 00:26:18.372 14:52:27 keyring_file -- common/autotest_common.sh@956 -- # kill -0 84011 00:26:18.372 14:52:27 keyring_file -- common/autotest_common.sh@957 -- # uname 00:26:18.372 14:52:27 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:18.372 14:52:27 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84011 00:26:18.372 killing process with pid 84011 00:26:18.372 Received shutdown signal, test time was about 1.000000 seconds 00:26:18.372 00:26:18.372 Latency(us) 00:26:18.372 [2024-11-04T14:52:27.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:18.372 [2024-11-04T14:52:27.512Z] =================================================================================================================== 00:26:18.372 [2024-11-04T14:52:27.512Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:18.372 14:52:27 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:18.372 14:52:27 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:18.372 14:52:27 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84011' 00:26:18.372 14:52:27 keyring_file -- common/autotest_common.sh@971 -- # kill 84011 00:26:18.372 14:52:27 keyring_file -- common/autotest_common.sh@976 -- # wait 84011 00:26:18.372 14:52:27 keyring_file -- keyring/file.sh@21 -- # killprocess 83765 00:26:18.372 14:52:27 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 83765 ']' 00:26:18.372 14:52:27 keyring_file -- common/autotest_common.sh@956 -- # kill -0 83765 00:26:18.372 14:52:27 keyring_file -- common/autotest_common.sh@957 -- # uname 00:26:18.372 14:52:27 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:18.372 14:52:27 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83765 00:26:18.372 killing process with pid 83765 00:26:18.372 14:52:27 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:18.372 14:52:27 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:18.372 14:52:27 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83765' 00:26:18.372 14:52:27 keyring_file -- common/autotest_common.sh@971 -- # kill 83765 00:26:18.372 14:52:27 keyring_file -- common/autotest_common.sh@976 -- # wait 83765 00:26:18.630 00:26:18.630 real 0m12.669s 00:26:18.630 user 0m31.224s 00:26:18.630 sys 0m2.054s 00:26:18.630 14:52:27 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:18.630 
************************************ 00:26:18.630 END TEST keyring_file 00:26:18.630 ************************************ 00:26:18.630 14:52:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:18.630 14:52:27 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:26:18.630 14:52:27 -- spdk/autotest.sh@290 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:26:18.630 14:52:27 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:18.630 14:52:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:18.630 14:52:27 -- common/autotest_common.sh@10 -- # set +x 00:26:18.630 ************************************ 00:26:18.630 START TEST keyring_linux 00:26:18.630 ************************************ 00:26:18.630 14:52:27 keyring_linux -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:26:18.630 Joined session keyring: 904086166 00:26:18.630 * Looking for test storage... 00:26:18.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:26:18.630 14:52:27 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:18.630 14:52:27 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:26:18.630 14:52:27 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:18.889 14:52:27 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:18.889 14:52:27 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:18.889 14:52:27 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:18.889 14:52:27 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:18.889 14:52:27 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:26:18.889 14:52:27 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:26:18.889 14:52:27 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:26:18.889 14:52:27 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:26:18.889 14:52:27 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:26:18.889 14:52:27 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:26:18.889 14:52:27 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:26:18.889 14:52:27 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:18.889 14:52:27 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:26:18.889 14:52:27 keyring_linux -- scripts/common.sh@345 -- # : 1 00:26:18.889 14:52:27 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:18.889 14:52:27 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:18.889 14:52:27 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:26:18.889 14:52:27 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:26:18.889 14:52:27 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:18.889 14:52:27 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:26:18.889 14:52:27 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:26:18.889 14:52:27 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:26:18.889 14:52:27 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:26:18.889 14:52:27 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:18.889 14:52:27 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:26:18.889 14:52:27 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:26:18.889 14:52:27 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:18.889 14:52:27 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:18.889 14:52:27 keyring_linux -- scripts/common.sh@368 -- # return 0 00:26:18.889 14:52:27 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:18.889 14:52:27 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:18.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.889 --rc genhtml_branch_coverage=1 00:26:18.889 --rc genhtml_function_coverage=1 00:26:18.889 --rc genhtml_legend=1 00:26:18.889 --rc geninfo_all_blocks=1 00:26:18.889 --rc geninfo_unexecuted_blocks=1 00:26:18.889 00:26:18.889 ' 00:26:18.889 14:52:27 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:18.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.889 --rc genhtml_branch_coverage=1 00:26:18.889 --rc genhtml_function_coverage=1 00:26:18.889 --rc genhtml_legend=1 00:26:18.889 --rc geninfo_all_blocks=1 00:26:18.889 --rc geninfo_unexecuted_blocks=1 00:26:18.889 00:26:18.889 ' 00:26:18.889 14:52:27 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:18.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.889 --rc genhtml_branch_coverage=1 00:26:18.889 --rc genhtml_function_coverage=1 00:26:18.889 --rc genhtml_legend=1 00:26:18.889 --rc geninfo_all_blocks=1 00:26:18.889 --rc geninfo_unexecuted_blocks=1 00:26:18.889 00:26:18.889 ' 00:26:18.889 14:52:27 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:18.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.889 --rc genhtml_branch_coverage=1 00:26:18.889 --rc genhtml_function_coverage=1 00:26:18.889 --rc genhtml_legend=1 00:26:18.889 --rc geninfo_all_blocks=1 00:26:18.889 --rc geninfo_unexecuted_blocks=1 00:26:18.889 00:26:18.889 ' 00:26:18.889 14:52:27 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:26:18.889 14:52:27 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:18.889 14:52:27 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:26:18.889 14:52:27 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:18.889 14:52:27 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:18.889 14:52:27 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:18.889 14:52:27 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:18.889 14:52:27 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:18.889 14:52:27 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:18.889 14:52:27 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:18.889 14:52:27 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:18.889 14:52:27 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:18.889 14:52:27 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:18.889 14:52:27 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0c7d476c-d4d7-4594-a48a-578d93697ffa 00:26:18.889 14:52:27 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=0c7d476c-d4d7-4594-a48a-578d93697ffa 00:26:18.889 14:52:27 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:18.890 14:52:27 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:18.890 14:52:27 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:18.890 14:52:27 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:18.890 14:52:27 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:18.890 14:52:27 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:26:18.890 14:52:27 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:18.890 14:52:27 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:18.890 14:52:27 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:18.890 14:52:27 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.890 14:52:27 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.890 14:52:27 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.890 14:52:27 keyring_linux -- paths/export.sh@5 -- # export PATH 00:26:18.890 14:52:27 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.890 14:52:27 keyring_linux -- nvmf/common.sh@51 -- # : 0 
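For reference, the NVME_HOSTNQN / NVME_HOSTID pair set above comes from nvme-cli; a minimal sketch (the UUID is per-run, and the exact way nvmf/common.sh derives NVME_HOSTID is an assumption here, not taken from the trace):

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # prints nqn.2014-08.org.nvmexpress:uuid:<random uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}    # one way to peel off the bare UUID (illustrative only)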
00:26:18.890 14:52:27 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:18.890 14:52:27 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:18.890 14:52:27 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:18.890 14:52:27 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:18.890 14:52:27 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:18.890 14:52:27 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:18.890 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:18.890 14:52:27 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:18.890 14:52:27 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:18.890 14:52:27 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:18.890 14:52:27 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:26:18.890 14:52:27 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:26:18.890 14:52:27 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:26:18.890 14:52:27 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:26:18.890 14:52:27 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:26:18.890 14:52:27 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:26:18.890 14:52:27 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:26:18.890 14:52:27 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:26:18.890 14:52:27 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:26:18.890 14:52:27 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:18.890 14:52:27 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:26:18.890 14:52:27 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:26:18.890 14:52:27 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:18.890 14:52:27 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:18.890 14:52:27 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:26:18.890 14:52:27 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:26:18.890 14:52:27 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:26:18.890 14:52:27 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:26:18.890 14:52:27 keyring_linux -- nvmf/common.sh@733 -- # python - 00:26:18.890 14:52:27 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:26:18.890 14:52:27 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:26:18.890 /tmp/:spdk-test:key0 00:26:18.890 14:52:27 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:26:18.890 14:52:27 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:26:18.890 14:52:27 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:26:18.890 14:52:27 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:26:18.890 14:52:27 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:26:18.890 14:52:27 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:26:18.890 14:52:27 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:26:18.890 14:52:27 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:26:18.890 14:52:27 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:26:18.890 14:52:27 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:26:18.890 14:52:27 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:26:18.890 14:52:27 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:26:18.890 14:52:27 keyring_linux -- nvmf/common.sh@733 -- # python - 00:26:18.890 14:52:27 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:26:18.890 14:52:27 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:26:18.890 /tmp/:spdk-test:key1 00:26:18.890 14:52:27 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:18.890 14:52:27 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=84127 00:26:18.890 14:52:27 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 84127 00:26:18.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:18.890 14:52:27 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 84127 ']' 00:26:18.890 14:52:27 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.890 14:52:27 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:18.890 14:52:27 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:18.890 14:52:27 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:18.890 14:52:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:26:18.890 [2024-11-04 14:52:27.958667] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
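The "[: : integer expression expected" message from nvmf/common.sh line 33 above is benign in this run: an empty value reaches an arithmetic test, so '[' sees "" where it expects a number, the conditional evaluates false, and the script keeps going. A small illustration of the failure mode and one defensive spelling (the variable name is made up):

  SPDK_TEST_FOO=""                                 # hypothetical, mirrors the empty value in the trace
  [ "$SPDK_TEST_FOO" -eq 1 ] && echo hit           # errors with "[: : integer expression expected", test is false
  [ "${SPDK_TEST_FOO:-0}" -eq 1 ] && echo hit      # defaulting the empty value to 0 keeps the test well-formed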
00:26:18.890 [2024-11-04 14:52:27.958727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84127 ] 00:26:19.149 [2024-11-04 14:52:28.095151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.149 [2024-11-04 14:52:28.126765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.149 [2024-11-04 14:52:28.167410] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:19.714 14:52:28 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:19.714 14:52:28 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:26:19.714 14:52:28 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:26:19.714 14:52:28 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.714 14:52:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:26:19.714 [2024-11-04 14:52:28.829178] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:19.714 null0 00:26:19.972 [2024-11-04 14:52:28.861150] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:19.972 [2024-11-04 14:52:28.861278] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:19.972 14:52:28 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.972 14:52:28 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:26:19.972 430316010 00:26:19.972 14:52:28 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:26:19.972 782966784 00:26:19.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:19.972 14:52:28 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=84145 00:26:19.972 14:52:28 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 84145 /var/tmp/bperf.sock 00:26:19.972 14:52:28 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 84145 ']' 00:26:19.972 14:52:28 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:19.972 14:52:28 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:19.972 14:52:28 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:26:19.972 14:52:28 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:19.972 14:52:28 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:19.972 14:52:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:26:19.972 [2024-11-04 14:52:28.924761] Starting SPDK v25.01-pre git sha1 6e713f9c6 / DPDK 24.03.0 initialization... 
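Outside the test harness, the session-keyring round trip exercised here (and torn down by cleanup at the end of the run) is a handful of keyctl calls; a minimal sketch reusing the non-secret test PSK logged above:

  psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
  sn=$(keyctl add user ":spdk-test:key0" "$psk" @s)   # add to the session keyring, capture the serial number
  keyctl search @s user ":spdk-test:key0"             # resolves the same serial, as check_keys does
  keyctl print "$sn"                                   # echoes the stored PSK payload
  keyctl unlink "$sn" @s                               # drop the link again, as cleanup() does at the end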
00:26:19.972 [2024-11-04 14:52:28.924827] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84145 ] 00:26:19.972 [2024-11-04 14:52:29.062502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.972 [2024-11-04 14:52:29.093295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.904 14:52:29 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:20.904 14:52:29 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:26:20.904 14:52:29 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:26:20.904 14:52:29 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:26:20.904 14:52:29 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:26:20.904 14:52:29 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:21.161 [2024-11-04 14:52:30.234007] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:21.161 14:52:30 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:26:21.161 14:52:30 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:26:21.418 [2024-11-04 14:52:30.426552] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:21.418 nvme0n1 00:26:21.418 14:52:30 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:26:21.418 14:52:30 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:26:21.418 14:52:30 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:26:21.418 14:52:30 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:26:21.418 14:52:30 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:26:21.418 14:52:30 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:21.676 14:52:30 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:26:21.676 14:52:30 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:26:21.676 14:52:30 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:26:21.676 14:52:30 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:21.676 14:52:30 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:26:21.676 14:52:30 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:26:21.676 14:52:30 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:21.934 14:52:30 keyring_linux -- keyring/linux.sh@25 -- # sn=430316010 00:26:21.934 14:52:30 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:26:21.934 14:52:30 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
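The bperf_cmd wrapper traced above is just rpc.py pointed at the bdevperf socket; stripped of the wrapper, the sequence that wires the kernel key into the initiator looks like this (commands and flags as logged in this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/bperf.sock keyring_linux_set_options --enable    # let bdevperf resolve keys from the Linux keyring
  "$rpc" -s /var/tmp/bperf.sock framework_start_init                  # finish init (bdevperf was started with --wait-for-rpc)
  "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
  "$rpc" -s /var/tmp/bperf.sock keyring_get_keys | jq length          # the registered key is now visible, hence check_keys 1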
00:26:21.934 14:52:30 keyring_linux -- keyring/linux.sh@26 -- # [[ 430316010 == \4\3\0\3\1\6\0\1\0 ]] 00:26:21.934 14:52:30 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 430316010 00:26:21.934 14:52:30 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:26:21.934 14:52:30 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:21.934 Running I/O for 1 seconds... 00:26:23.308 23489.00 IOPS, 91.75 MiB/s 00:26:23.308 Latency(us) 00:26:23.308 [2024-11-04T14:52:32.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.308 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:23.308 nvme0n1 : 1.01 23488.26 91.75 0.00 0.00 5432.76 4587.52 9779.99 00:26:23.308 [2024-11-04T14:52:32.448Z] =================================================================================================================== 00:26:23.308 [2024-11-04T14:52:32.448Z] Total : 23488.26 91.75 0.00 0.00 5432.76 4587.52 9779.99 00:26:23.308 { 00:26:23.308 "results": [ 00:26:23.308 { 00:26:23.308 "job": "nvme0n1", 00:26:23.308 "core_mask": "0x2", 00:26:23.308 "workload": "randread", 00:26:23.308 "status": "finished", 00:26:23.308 "queue_depth": 128, 00:26:23.308 "io_size": 4096, 00:26:23.308 "runtime": 1.005481, 00:26:23.308 "iops": 23488.260842323227, 00:26:23.308 "mibps": 91.7510189153251, 00:26:23.308 "io_failed": 0, 00:26:23.308 "io_timeout": 0, 00:26:23.308 "avg_latency_us": 5432.7591725647435, 00:26:23.308 "min_latency_us": 4587.52, 00:26:23.308 "max_latency_us": 9779.987692307692 00:26:23.308 } 00:26:23.308 ], 00:26:23.308 "core_count": 1 00:26:23.308 } 00:26:23.308 14:52:32 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:26:23.308 14:52:32 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:26:23.308 14:52:32 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:26:23.308 14:52:32 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:26:23.308 14:52:32 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:26:23.308 14:52:32 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:26:23.308 14:52:32 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:23.308 14:52:32 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:26:23.308 14:52:32 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:26:23.308 14:52:32 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:26:23.308 14:52:32 keyring_linux -- keyring/linux.sh@23 -- # return 00:26:23.308 14:52:32 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:26:23.308 14:52:32 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:26:23.308 14:52:32 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:26:23.308 14:52:32 
keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:26:23.308 14:52:32 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:23.308 14:52:32 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:26:23.308 14:52:32 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:23.308 14:52:32 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:26:23.308 14:52:32 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:26:23.566 [2024-11-04 14:52:32.583462] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:26:23.566 [2024-11-04 14:52:32.584240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12405d0 (107): Transport endpoint is not connected 00:26:23.566 [2024-11-04 14:52:32.585228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12405d0 (9): Bad file descriptor 00:26:23.566 [2024-11-04 14:52:32.586224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:26:23.566 [2024-11-04 14:52:32.586335] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:26:23.566 [2024-11-04 14:52:32.586380] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:26:23.566 [2024-11-04 14:52:32.586423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:26:23.566 request: 00:26:23.566 { 00:26:23.566 "name": "nvme0", 00:26:23.566 "trtype": "tcp", 00:26:23.566 "traddr": "127.0.0.1", 00:26:23.566 "adrfam": "ipv4", 00:26:23.566 "trsvcid": "4420", 00:26:23.566 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:23.566 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:23.566 "prchk_reftag": false, 00:26:23.566 "prchk_guard": false, 00:26:23.566 "hdgst": false, 00:26:23.566 "ddgst": false, 00:26:23.566 "psk": ":spdk-test:key1", 00:26:23.566 "allow_unrecognized_csi": false, 00:26:23.566 "method": "bdev_nvme_attach_controller", 00:26:23.566 "req_id": 1 00:26:23.566 } 00:26:23.566 Got JSON-RPC error response 00:26:23.566 response: 00:26:23.566 { 00:26:23.566 "code": -5, 00:26:23.566 "message": "Input/output error" 00:26:23.566 } 00:26:23.566 14:52:32 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:26:23.566 14:52:32 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:23.566 14:52:32 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:23.566 14:52:32 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:23.566 14:52:32 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:26:23.566 14:52:32 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:26:23.566 14:52:32 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:26:23.566 14:52:32 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:26:23.566 14:52:32 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:26:23.566 14:52:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:26:23.566 14:52:32 keyring_linux -- keyring/linux.sh@33 -- # sn=430316010 00:26:23.566 14:52:32 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 430316010 00:26:23.566 1 links removed 00:26:23.566 14:52:32 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:26:23.566 14:52:32 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:26:23.566 14:52:32 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:26:23.566 14:52:32 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:26:23.566 14:52:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:26:23.566 14:52:32 keyring_linux -- keyring/linux.sh@33 -- # sn=782966784 00:26:23.566 14:52:32 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 782966784 00:26:23.566 1 links removed 00:26:23.566 14:52:32 keyring_linux -- keyring/linux.sh@41 -- # killprocess 84145 00:26:23.566 14:52:32 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 84145 ']' 00:26:23.566 14:52:32 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 84145 00:26:23.566 14:52:32 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:26:23.566 14:52:32 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:23.566 14:52:32 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84145 00:26:23.566 killing process with pid 84145 00:26:23.566 Received shutdown signal, test time was about 1.000000 seconds 00:26:23.566 00:26:23.566 Latency(us) 00:26:23.566 [2024-11-04T14:52:32.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.566 [2024-11-04T14:52:32.706Z] =================================================================================================================== 00:26:23.566 [2024-11-04T14:52:32.706Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:23.566 14:52:32 keyring_linux -- 
common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:23.567 14:52:32 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:23.567 14:52:32 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84145' 00:26:23.567 14:52:32 keyring_linux -- common/autotest_common.sh@971 -- # kill 84145 00:26:23.567 14:52:32 keyring_linux -- common/autotest_common.sh@976 -- # wait 84145 00:26:23.825 14:52:32 keyring_linux -- keyring/linux.sh@42 -- # killprocess 84127 00:26:23.825 14:52:32 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 84127 ']' 00:26:23.825 14:52:32 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 84127 00:26:23.825 14:52:32 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:26:23.825 14:52:32 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:23.825 14:52:32 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84127 00:26:23.825 killing process with pid 84127 00:26:23.825 14:52:32 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:23.825 14:52:32 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:23.825 14:52:32 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84127' 00:26:23.825 14:52:32 keyring_linux -- common/autotest_common.sh@971 -- # kill 84127 00:26:23.825 14:52:32 keyring_linux -- common/autotest_common.sh@976 -- # wait 84127 00:26:23.825 00:26:23.825 real 0m5.273s 00:26:23.825 user 0m10.134s 00:26:23.825 sys 0m1.132s 00:26:23.825 14:52:32 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:23.825 ************************************ 00:26:23.825 END TEST keyring_linux 00:26:23.825 ************************************ 00:26:23.825 14:52:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:26:24.085 14:52:32 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:26:24.085 14:52:32 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:26:24.085 14:52:32 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:26:24.085 14:52:32 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:26:24.085 14:52:32 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:26:24.085 14:52:32 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:26:24.085 14:52:32 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:26:24.085 14:52:32 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:26:24.085 14:52:32 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:26:24.085 14:52:32 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:26:24.085 14:52:32 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:26:24.085 14:52:32 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:26:24.085 14:52:32 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:26:24.085 14:52:32 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:26:24.085 14:52:32 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:26:24.085 14:52:32 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:26:24.085 14:52:32 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:26:24.085 14:52:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:24.085 14:52:32 -- common/autotest_common.sh@10 -- # set +x 00:26:24.085 14:52:32 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:26:24.085 14:52:32 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:26:24.085 14:52:32 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:26:24.085 14:52:32 -- common/autotest_common.sh@10 -- # set +x 00:26:25.472 INFO: APP EXITING 00:26:25.472 INFO: killing all VMs 
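For orientation, the killprocess helper traced above for pids 84145 and 84127 amounts to roughly the following; a simplified sketch, not the full autotest_common.sh implementation (which also special-cases sudo-owned processes and non-Linux hosts):

  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if the process is already gone
      echo "killing process with pid $pid"
      kill "$pid"                              # SIGTERM the target
      wait "$pid"                              # reap it (assumes it is a child of this shell, as in the test)
  }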
00:26:25.472 INFO: killing vhost app 00:26:25.472 INFO: EXIT DONE 00:26:25.730 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:25.730 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:26:25.987 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:26:26.245 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:26.505 Cleaning 00:26:26.505 Removing: /var/run/dpdk/spdk0/config 00:26:26.505 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:26:26.505 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:26:26.505 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:26:26.505 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:26:26.505 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:26:26.505 Removing: /var/run/dpdk/spdk0/hugepage_info 00:26:26.505 Removing: /var/run/dpdk/spdk1/config 00:26:26.505 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:26:26.505 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:26:26.505 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:26:26.505 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:26:26.505 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:26:26.506 Removing: /var/run/dpdk/spdk1/hugepage_info 00:26:26.506 Removing: /var/run/dpdk/spdk2/config 00:26:26.506 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:26:26.506 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:26:26.506 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:26:26.506 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:26:26.506 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:26:26.506 Removing: /var/run/dpdk/spdk2/hugepage_info 00:26:26.506 Removing: /var/run/dpdk/spdk3/config 00:26:26.506 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:26:26.506 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:26:26.506 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:26:26.506 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:26:26.506 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:26:26.506 Removing: /var/run/dpdk/spdk3/hugepage_info 00:26:26.506 Removing: /var/run/dpdk/spdk4/config 00:26:26.506 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:26:26.506 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:26:26.506 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:26:26.506 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:26:26.506 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:26:26.506 Removing: /var/run/dpdk/spdk4/hugepage_info 00:26:26.506 Removing: /dev/shm/nvmf_trace.0 00:26:26.506 Removing: /dev/shm/spdk_tgt_trace.pid56142 00:26:26.506 Removing: /var/run/dpdk/spdk0 00:26:26.506 Removing: /var/run/dpdk/spdk1 00:26:26.506 Removing: /var/run/dpdk/spdk2 00:26:26.506 Removing: /var/run/dpdk/spdk3 00:26:26.506 Removing: /var/run/dpdk/spdk4 00:26:26.506 Removing: /var/run/dpdk/spdk_pid55989 00:26:26.506 Removing: /var/run/dpdk/spdk_pid56142 00:26:26.506 Removing: /var/run/dpdk/spdk_pid56330 00:26:26.506 Removing: /var/run/dpdk/spdk_pid56415 00:26:26.506 Removing: /var/run/dpdk/spdk_pid56431 00:26:26.506 Removing: /var/run/dpdk/spdk_pid56540 00:26:26.506 Removing: /var/run/dpdk/spdk_pid56553 00:26:26.506 Removing: /var/run/dpdk/spdk_pid56687 00:26:26.506 Removing: /var/run/dpdk/spdk_pid56877 00:26:26.506 Removing: /var/run/dpdk/spdk_pid57024 00:26:26.506 Removing: /var/run/dpdk/spdk_pid57098 00:26:26.506 
Removing: /var/run/dpdk/spdk_pid57169 00:26:26.506 Removing: /var/run/dpdk/spdk_pid57262 00:26:26.506 Removing: /var/run/dpdk/spdk_pid57342 00:26:26.506 Removing: /var/run/dpdk/spdk_pid57375 00:26:26.506 Removing: /var/run/dpdk/spdk_pid57406 00:26:26.506 Removing: /var/run/dpdk/spdk_pid57474 00:26:26.506 Removing: /var/run/dpdk/spdk_pid57539 00:26:26.506 Removing: /var/run/dpdk/spdk_pid57954 00:26:26.506 Removing: /var/run/dpdk/spdk_pid58002 00:26:26.506 Removing: /var/run/dpdk/spdk_pid58042 00:26:26.506 Removing: /var/run/dpdk/spdk_pid58058 00:26:26.506 Removing: /var/run/dpdk/spdk_pid58114 00:26:26.506 Removing: /var/run/dpdk/spdk_pid58130 00:26:26.506 Removing: /var/run/dpdk/spdk_pid58186 00:26:26.506 Removing: /var/run/dpdk/spdk_pid58202 00:26:26.506 Removing: /var/run/dpdk/spdk_pid58242 00:26:26.506 Removing: /var/run/dpdk/spdk_pid58260 00:26:26.506 Removing: /var/run/dpdk/spdk_pid58300 00:26:26.506 Removing: /var/run/dpdk/spdk_pid58307 00:26:26.506 Removing: /var/run/dpdk/spdk_pid58432 00:26:26.506 Removing: /var/run/dpdk/spdk_pid58473 00:26:26.506 Removing: /var/run/dpdk/spdk_pid58550 00:26:26.506 Removing: /var/run/dpdk/spdk_pid58879 00:26:26.506 Removing: /var/run/dpdk/spdk_pid58896 00:26:26.506 Removing: /var/run/dpdk/spdk_pid58926 00:26:26.506 Removing: /var/run/dpdk/spdk_pid58935 00:26:26.506 Removing: /var/run/dpdk/spdk_pid58951 00:26:26.506 Removing: /var/run/dpdk/spdk_pid58970 00:26:26.506 Removing: /var/run/dpdk/spdk_pid58983 00:26:26.506 Removing: /var/run/dpdk/spdk_pid58993 00:26:26.506 Removing: /var/run/dpdk/spdk_pid59012 00:26:26.506 Removing: /var/run/dpdk/spdk_pid59030 00:26:26.506 Removing: /var/run/dpdk/spdk_pid59041 00:26:26.506 Removing: /var/run/dpdk/spdk_pid59060 00:26:26.506 Removing: /var/run/dpdk/spdk_pid59068 00:26:26.506 Removing: /var/run/dpdk/spdk_pid59089 00:26:26.506 Removing: /var/run/dpdk/spdk_pid59107 00:26:26.506 Removing: /var/run/dpdk/spdk_pid59116 00:26:26.506 Removing: /var/run/dpdk/spdk_pid59132 00:26:26.506 Removing: /var/run/dpdk/spdk_pid59145 00:26:26.506 Removing: /var/run/dpdk/spdk_pid59163 00:26:26.506 Removing: /var/run/dpdk/spdk_pid59174 00:26:26.506 Removing: /var/run/dpdk/spdk_pid59205 00:26:26.506 Removing: /var/run/dpdk/spdk_pid59218 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59248 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59314 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59343 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59352 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59375 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59385 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59392 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59429 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59448 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59471 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59481 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59490 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59494 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59504 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59513 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59517 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59532 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59555 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59582 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59591 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59614 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59624 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59631 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59672 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59683 00:26:26.766 Removing: 
/var/run/dpdk/spdk_pid59704 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59712 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59719 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59727 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59733 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59736 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59744 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59751 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59828 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59870 00:26:26.766 Removing: /var/run/dpdk/spdk_pid59982 00:26:26.766 Removing: /var/run/dpdk/spdk_pid60014 00:26:26.766 Removing: /var/run/dpdk/spdk_pid60050 00:26:26.766 Removing: /var/run/dpdk/spdk_pid60070 00:26:26.766 Removing: /var/run/dpdk/spdk_pid60081 00:26:26.766 Removing: /var/run/dpdk/spdk_pid60101 00:26:26.766 Removing: /var/run/dpdk/spdk_pid60132 00:26:26.766 Removing: /var/run/dpdk/spdk_pid60148 00:26:26.766 Removing: /var/run/dpdk/spdk_pid60220 00:26:26.766 Removing: /var/run/dpdk/spdk_pid60236 00:26:26.766 Removing: /var/run/dpdk/spdk_pid60275 00:26:26.766 Removing: /var/run/dpdk/spdk_pid60336 00:26:26.766 Removing: /var/run/dpdk/spdk_pid60381 00:26:26.766 Removing: /var/run/dpdk/spdk_pid60410 00:26:26.766 Removing: /var/run/dpdk/spdk_pid60508 00:26:26.766 Removing: /var/run/dpdk/spdk_pid60551 00:26:26.766 Removing: /var/run/dpdk/spdk_pid60583 00:26:26.766 Removing: /var/run/dpdk/spdk_pid60810 00:26:26.766 Removing: /var/run/dpdk/spdk_pid60902 00:26:26.766 Removing: /var/run/dpdk/spdk_pid60925 00:26:26.766 Removing: /var/run/dpdk/spdk_pid60947 00:26:26.766 Removing: /var/run/dpdk/spdk_pid60988 00:26:26.766 Removing: /var/run/dpdk/spdk_pid61015 00:26:26.766 Removing: /var/run/dpdk/spdk_pid61056 00:26:26.766 Removing: /var/run/dpdk/spdk_pid61081 00:26:26.766 Removing: /var/run/dpdk/spdk_pid61463 00:26:26.766 Removing: /var/run/dpdk/spdk_pid61503 00:26:26.766 Removing: /var/run/dpdk/spdk_pid61839 00:26:26.766 Removing: /var/run/dpdk/spdk_pid62303 00:26:26.766 Removing: /var/run/dpdk/spdk_pid62568 00:26:26.766 Removing: /var/run/dpdk/spdk_pid63426 00:26:26.766 Removing: /var/run/dpdk/spdk_pid64346 00:26:26.766 Removing: /var/run/dpdk/spdk_pid64468 00:26:26.766 Removing: /var/run/dpdk/spdk_pid64531 00:26:26.766 Removing: /var/run/dpdk/spdk_pid65933 00:26:26.766 Removing: /var/run/dpdk/spdk_pid66238 00:26:26.766 Removing: /var/run/dpdk/spdk_pid69623 00:26:26.766 Removing: /var/run/dpdk/spdk_pid69966 00:26:26.766 Removing: /var/run/dpdk/spdk_pid70085 00:26:26.766 Removing: /var/run/dpdk/spdk_pid70219 00:26:26.766 Removing: /var/run/dpdk/spdk_pid70235 00:26:26.766 Removing: /var/run/dpdk/spdk_pid70263 00:26:26.766 Removing: /var/run/dpdk/spdk_pid70292 00:26:26.766 Removing: /var/run/dpdk/spdk_pid70380 00:26:26.766 Removing: /var/run/dpdk/spdk_pid70516 00:26:26.766 Removing: /var/run/dpdk/spdk_pid70649 00:26:26.766 Removing: /var/run/dpdk/spdk_pid70725 00:26:26.766 Removing: /var/run/dpdk/spdk_pid70912 00:26:26.766 Removing: /var/run/dpdk/spdk_pid70992 00:26:26.766 Removing: /var/run/dpdk/spdk_pid71066 00:26:26.766 Removing: /var/run/dpdk/spdk_pid71417 00:26:26.766 Removing: /var/run/dpdk/spdk_pid71837 00:26:26.766 Removing: /var/run/dpdk/spdk_pid71838 00:26:26.766 Removing: /var/run/dpdk/spdk_pid71839 00:26:26.766 Removing: /var/run/dpdk/spdk_pid72098 00:26:26.766 Removing: /var/run/dpdk/spdk_pid72351 00:26:26.766 Removing: /var/run/dpdk/spdk_pid72731 00:26:26.766 Removing: /var/run/dpdk/spdk_pid72740 00:26:26.766 Removing: /var/run/dpdk/spdk_pid73061 00:26:26.766 Removing: /var/run/dpdk/spdk_pid73075 
00:26:26.766 Removing: /var/run/dpdk/spdk_pid73089 00:26:26.767 Removing: /var/run/dpdk/spdk_pid73120 00:26:26.767 Removing: /var/run/dpdk/spdk_pid73130 00:26:26.767 Removing: /var/run/dpdk/spdk_pid73474 00:26:26.767 Removing: /var/run/dpdk/spdk_pid73527 00:26:26.767 Removing: /var/run/dpdk/spdk_pid73852 00:26:26.767 Removing: /var/run/dpdk/spdk_pid74049 00:26:26.767 Removing: /var/run/dpdk/spdk_pid74477 00:26:26.767 Removing: /var/run/dpdk/spdk_pid75022 00:26:26.767 Removing: /var/run/dpdk/spdk_pid75850 00:26:27.028 Removing: /var/run/dpdk/spdk_pid76479 00:26:27.028 Removing: /var/run/dpdk/spdk_pid76486 00:26:27.028 Removing: /var/run/dpdk/spdk_pid78462 00:26:27.028 Removing: /var/run/dpdk/spdk_pid78522 00:26:27.028 Removing: /var/run/dpdk/spdk_pid78577 00:26:27.028 Removing: /var/run/dpdk/spdk_pid78638 00:26:27.028 Removing: /var/run/dpdk/spdk_pid78748 00:26:27.028 Removing: /var/run/dpdk/spdk_pid78802 00:26:27.028 Removing: /var/run/dpdk/spdk_pid78849 00:26:27.028 Removing: /var/run/dpdk/spdk_pid78904 00:26:27.028 Removing: /var/run/dpdk/spdk_pid79259 00:26:27.028 Removing: /var/run/dpdk/spdk_pid80476 00:26:27.028 Removing: /var/run/dpdk/spdk_pid80622 00:26:27.028 Removing: /var/run/dpdk/spdk_pid80864 00:26:27.028 Removing: /var/run/dpdk/spdk_pid81476 00:26:27.028 Removing: /var/run/dpdk/spdk_pid81631 00:26:27.028 Removing: /var/run/dpdk/spdk_pid81793 00:26:27.028 Removing: /var/run/dpdk/spdk_pid81890 00:26:27.028 Removing: /var/run/dpdk/spdk_pid82066 00:26:27.028 Removing: /var/run/dpdk/spdk_pid82175 00:26:27.028 Removing: /var/run/dpdk/spdk_pid82888 00:26:27.028 Removing: /var/run/dpdk/spdk_pid82922 00:26:27.028 Removing: /var/run/dpdk/spdk_pid82953 00:26:27.028 Removing: /var/run/dpdk/spdk_pid83207 00:26:27.028 Removing: /var/run/dpdk/spdk_pid83248 00:26:27.028 Removing: /var/run/dpdk/spdk_pid83284 00:26:27.028 Removing: /var/run/dpdk/spdk_pid83765 00:26:27.028 Removing: /var/run/dpdk/spdk_pid83778 00:26:27.028 Removing: /var/run/dpdk/spdk_pid84011 00:26:27.028 Removing: /var/run/dpdk/spdk_pid84127 00:26:27.028 Removing: /var/run/dpdk/spdk_pid84145 00:26:27.028 Clean 00:26:27.028 14:52:36 -- common/autotest_common.sh@1451 -- # return 0 00:26:27.028 14:52:36 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:26:27.028 14:52:36 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:27.028 14:52:36 -- common/autotest_common.sh@10 -- # set +x 00:26:27.028 14:52:36 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:26:27.028 14:52:36 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:27.028 14:52:36 -- common/autotest_common.sh@10 -- # set +x 00:26:27.028 14:52:36 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:27.028 14:52:36 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:26:27.028 14:52:36 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:26:27.028 14:52:36 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:26:27.028 14:52:36 -- spdk/autotest.sh@394 -- # hostname 00:26:27.028 14:52:36 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:26:27.289 geninfo: WARNING: invalid characters removed from testname! 
00:26:53.816 14:52:58 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:53.816 14:53:01 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:54.106 14:53:03 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:56.632 14:53:05 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:58.005 14:53:06 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:59.904 14:53:08 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:02.430 14:53:10 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:02.430 14:53:11 -- spdk/autorun.sh@1 -- $ timing_finish 00:27:02.430 14:53:11 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:27:02.430 14:53:11 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:02.430 14:53:11 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:27:02.430 14:53:11 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:02.430 + [[ -n 5002 ]] 00:27:02.430 + sudo kill 5002 00:27:02.437 [Pipeline] } 00:27:02.452 [Pipeline] // timeout 00:27:02.457 [Pipeline] } 00:27:02.471 [Pipeline] // stage 00:27:02.475 [Pipeline] } 00:27:02.490 [Pipeline] // catchError 00:27:02.498 [Pipeline] stage 00:27:02.500 [Pipeline] { (Stop VM) 00:27:02.511 [Pipeline] sh 00:27:02.789 + vagrant halt 00:27:05.317 ==> default: Halting domain... 
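Condensed, the coverage post-processing logged above is the usual lcov capture/merge/filter sequence; a sketch with the workspace paths shortened and the branch/function flags exported earlier in the run (the real autotest.sh filters a few more paths, e.g. the examples and spdk_top sources):

  LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"
  $LCOV -c --no-external -d spdk -t "$(hostname)" -o cov_test.info      # capture counters from the instrumented tree
  $LCOV -a cov_base.info -a cov_test.info -o cov_total.info             # merge the pre-test baseline with this run
  $LCOV -r cov_total.info '*/dpdk/*' -o cov_total.info                  # strip bundled third-party sources
  $LCOV -r cov_total.info '/usr/*' -o cov_total.info                    # and system headers/libraries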
00:27:08.607 [Pipeline] sh 00:27:08.881 + vagrant destroy -f 00:27:11.405 ==> default: Removing domain... 00:27:11.416 [Pipeline] sh 00:27:11.692 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:27:11.700 [Pipeline] } 00:27:11.714 [Pipeline] // stage 00:27:11.719 [Pipeline] } 00:27:11.732 [Pipeline] // dir 00:27:11.737 [Pipeline] } 00:27:11.749 [Pipeline] // wrap 00:27:11.754 [Pipeline] } 00:27:11.766 [Pipeline] // catchError 00:27:11.774 [Pipeline] stage 00:27:11.776 [Pipeline] { (Epilogue) 00:27:11.788 [Pipeline] sh 00:27:12.075 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:27:17.337 [Pipeline] catchError 00:27:17.339 [Pipeline] { 00:27:17.351 [Pipeline] sh 00:27:17.625 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:27:17.625 Artifacts sizes are good 00:27:17.632 [Pipeline] } 00:27:17.646 [Pipeline] // catchError 00:27:17.655 [Pipeline] archiveArtifacts 00:27:17.661 Archiving artifacts 00:27:17.771 [Pipeline] cleanWs 00:27:17.781 [WS-CLEANUP] Deleting project workspace... 00:27:17.781 [WS-CLEANUP] Deferred wipeout is used... 00:27:17.786 [WS-CLEANUP] done 00:27:17.788 [Pipeline] } 00:27:17.802 [Pipeline] // stage 00:27:17.807 [Pipeline] } 00:27:17.820 [Pipeline] // node 00:27:17.825 [Pipeline] End of Pipeline 00:27:17.861 Finished: SUCCESS